https://www.youtube.com/live/Brwhbjh3boU
Hey, fun fact. Guys, did you know that max sequence input length is actually not the same as the context window? Wiz, Leo, isn't that right? Pretty true, Greg. Fun fact, and we will dive into all of the details we need to go long on context today. You guys ready to get into it? Let's do it. All right, we'll have you guys back in just a sec.

Today we go long on context with the chief scientist from Gradient, Leo Pichelis, and of course the Wiz. We're talking about exactly how these context windows should be thought about, how we can make them longer, what that requires, and how it's happening in real time. And of course, once we have the long context window, how do we know that it's actually performing well? How can we evaluate long context window performance? So we're going to set this off by providing some context. Then we're going to talk about what goes into taking an off-the-shelf model and expanding its context window. And we're going to talk about some of the popular frameworks for evaluating long context window LLMs.

First, I want to provide a little background here, a little context. In April, of course, Meta released Llama 3. What was so interesting is that when they put out the blog, they said, well, we're not going to do the long context window for a little while; we're working on it, we've got a bigger model, and we've got some longer context versions coming. And everybody said, man, it's only an 8K input sequence length; I'm really left wanting with this small amount that I can put into this thing. And the race was on in the industry. Gradient, friends of ours, were instantly off to the races, crushing it. Somehow, within just a few days, working maniacally over the weekend as we understand it, they released a 160K context length version of the 8 billion parameter model, and then a few days later a 1 million context length version of the 8B. Then they released a 262K context length version of the 70B, before going straight buck wild on May the 4th with their 1 million context length version of the 70B. They've actually released a 4 million context length version of the 8B now, and it's honestly hard to keep track of all the cool stuff they're doing. So I wanted to get Leo up on stage to tell you a little bit about what this has been like behind the scenes at Gradient. Leo, how did you guys pull this off before Meta, and so quickly? What's it been like down in the trenches, and what should we expect to see from you guys next?

Yeah, thanks. It's definitely been a lot of work, as you mentioned, and a number of nights and weekends. A lot of it is being in the right place at the right time. We were coincidentally already working on some long context stuff before Llama 3 dropped. We were working with our compute partner, Crusoe, and they had just spun up, I think, a thousand-GPU L40S cluster.
And we were throwing the ball around, trying to figure out what's something really interesting we could work on together, and we settled on long context. Over the past few months, really just this year, people have been putting out long context models. There was the Large World Model maybe a month or two ago, up to around a million tokens of context, and Google Gemini has been talking about it too. So we were coincidentally working on some stuff, and it was kind of the perfect storm, because it wasn't just us, it was the whole open source community. There's this great EasyContext GitHub repo out there, literally released maybe two or three weeks before Llama 3. So we were in the right place at the right time, working on the right stuff. We heard about Llama 3 dropping and said, great, let's see how it would work, and it all just fell together.

It wasn't much of a pivot then; it was just an acceleration of what you guys were already working on behind the scenes? Yeah, exactly. We were mostly working on Llama 2 architectures, so there was a little bit of hope and pray that all of this would just work for Llama 3, and, credit to Meta and their team, it did. It worked flawlessly. Nice, nice. So we saw the 4 million input sequence length. What's next? What are you guys working on now, and how long are we going here, exactly?

Yeah. You could potentially take it as long as you want, but at a certain point there are diminishing returns, right? At 4 million context length, which is something like 1.25 tokens per word, you've got the whole Harry Potter book series a number of times over. And at the end of the day, Gradient serves enterprise customers; we build agent systems for enterprise companies, and something like four or five million is pretty much where the demand we've been seeing is. So I think the pivot from here is making those models better. There's tons of room to improve, both on evaluating them and on figuring out, okay, it can read long context, but maybe you want it to reason about long context, and then also just working on the applications. I think we're barely scratching the surface of all the cool stuff we could do with it.

Yeah. I like this idea that enterprise has a kind of context length where returns start diminishing; this might be an interesting place to come back to in our discussions today. And then this idea of exactly what "better" means, I think we'll cross that path as well. Thanks for giving us some initial context, Leo. We'll have you back in just a little bit to keep the discussion rolling.

For now, a little background. This idea of in-context learning is fundamental. It came out with the GPT-3 paper, "Language Models are Few-Shot Learners," and it's all in the figure caption right there: larger models make increasingly efficient use of in-context information. The previous GPT-1 and GPT-2 paradigms, a long time ago now, really required a lot of fine-tuning.
What we saw with in-context-learning-enabled GPT-3 is that if we just did zero shot, that is instruction only, one shot, meaning one example of the kind of thing you want done, and few shot, then especially at large enough model sizes we got very performant results across many tasks. In fact, if you've watched our channel before, you've noticed that we often talk about RAG as fundamentally requiring this in-context learning piece. And it's a bit jargony to even say "in-context learning." What we're really talking about is using the prompt: doing some prompt engineering, using the input to the LLM, giving clear and specific instructions. We can also provide context during our prompting: hey, you're a specific role, you should act a specific way, you have a specific feeling. But more importantly, when we want things to be fact-checkable, we often care about the retrieved context piece, because as we move from prompt engineering into RAG, we're optimizing the context. That's fundamentally what we're doing. As we go from zero shot to one shot to few shot to many shot learning, we're just giving more and more examples; those are task-specific things. In RAG, we're giving more reference material. In either case, we're giving the LLM access to knowledge that it wasn't really trained on in the first place.

The way this works in a retrieval augmented generation system is that we ask a question, it gets converted to a vector representation, and we've stored all of our potentially relevant data locally in a database called a vector store. Then we look for stuff that's similar to our question. The stuff we find that's similar to our question, we inject directly into our prompt using a prompt template that says something like: use the provided context to answer the user's query; you may not answer the user's query unless there's specific context; if you don't know the answer, say "I don't know." And then what we return is natural language that refers directly to this context. This retrieval process is the R in RAG, the augmentation that we're doing to the prompt is the in-context learning, and the generation, the G, is the final step that gets us our answer.

Now, what we're talking about doing instead is saying: well, what if we just had a question, and then we copy-pasted the entire Harry Potter series a couple of times over? Let's just shove it into the model and get our answer. That's the long context approach. And you might say, why do people want to do this? Well, clearly it's a no-brainer, because it doesn't get easier than that. So let's bring our experts back up to the stage to discuss the big question we are constantly getting: why do we even need RAG if we have long context windows? They're going to kill RAG, right? Let's start off with that. Leo, what are your thoughts on this?
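To make the retrieve-then-augment-then-generate loop just described concrete, here is a minimal sketch using the OpenAI Python client for both embeddings and chat, with a tiny in-memory list standing in for the vector store. The model names, the prompt template wording, and the toy documents are illustrative assumptions rather than the exact setup discussed on the show.

```python
# Minimal RAG sketch: retrieve (R), augment the prompt (in-context learning), generate (G).
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# A toy "vector store": a couple of documents and their embeddings held in memory.
documents = [
    "Sally's birthday is May 22nd.",
    "Sally went to the store and bought apples and oranges.",
]
doc_vectors = embed(documents)

def rag_answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    retrieved = [documents[i] for i in np.argsort(-sims)[:k]]   # the R: retrieval
    prompt = (
        "Use the provided context to answer the user's query. "
        "If you don't know the answer, say 'I don't know.'\n\n"
        "Context:\n" + "\n".join(retrieved) +
        f"\n\nQuery: {question}"                                 # the A: augmenting the prompt
    )
    chat = client.chat.completions.create(                       # the G: generation
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return chat.choices[0].message.content

print(rag_answer("What does someone whose birthday is in May buy at the store?"))
```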
Yeah, I don't think RAG is dead if you get long context, but I do think it looks a little bit different. One of the things that RAG really depends on is that you have to pull out the right information. That's why people are working on a lot of re-rankers: you pull out more information than you think you need and then you sift through it. That stuff is pretty complicated. And isn't it great if you could just pull out more information than you need, throw it all into the model, and let it figure out what it uses and what it doesn't use? The part that would be pretty hard to achieve with just RAG is when the pieces of information are interrelated. One example, and it's kind of a silly example: maybe in one document you have people's birthdays, say Sally's birthday is May 22nd, and in another document you have "Sally went to the store and bought apples and oranges." And you ask the RAG system, what does someone whose birthday is in May buy at the store? Because those two pieces are interrelated, it would take the LLM multiple queries, but if you had it all in the context, it'd just be: well, apples and oranges. So I think that kind of interrelated stuff is really where long context shines.

Yeah, so this sort of long-range interrelation and correlation. Wiz, I think this is something you've thought quite a lot about, given all the questions we get from students. Maybe you can comment on this quote that I took directly from you, to color the discussion: "If we built a perfect retrieval system, then we wouldn't need long context." Is this true or false in your opinion, and why?

Well, since I said it, I'm going to have to say that it's true. The idea, I think, is right. If we have perfect retrieval, then we always get the information we need to answer the question correctly, so we don't really need long context. We don't necessarily need long context, assuming that the context we need to answer the question isn't itself super long, and for most questions that's going to be reasonably true. But that perfect retrieval system is not a trivial problem, to be a little silly about it; it's a very, very difficult problem, in fact. And I think long context can help us fudge that perfect context by just including more information, so that we're more likely to have the correct context when we're looking to do anything with it. That's the qualification.

Yeah, so this is clearly a hot take. Now, Leo, you mentioned enterprise earlier, and you mentioned this four to five million length. Enterprises have to be asking themselves this question. How are you thinking about providing enterprise the best solution, one that balances great retrieval with the right amount of context? Are you typically leveraging both of these tools in any given use case? Maybe just tell us whatever you can share. No, happy to share; people might get mad at me at work afterwards, but happy to share whatever. The one thing I was going to say about perfect retrieval is that I think you can think about long context as a way to get at perfect retrieval.
I think that's kind of what you're getting at as well. I always like to think about analogies to the human world. I could ask, say, a lawyer to find me the best counter-argument for this particular case, and what they would probably do is read through a whole ton of prior cases, find the few examples that are really worthwhile, compile them, and return them. That one piece of information they return might very well be pretty short context. Lawyers get paid by the hour, so it's probably worthwhile to a lot of folks to have it be a concise summary, but there's a ton of work that went into it. So maybe you have the LLM go through all those pages and pages of briefs to end up with that perfect retrieval; that's one way to do it.

You were asking about the cost considerations for enterprise, and that's totally something to think about, because the trade-off you're making by using a long context window is that it's probably more costly than doing a RAG query. It takes a certain number of GPUs. I think on eight GPUs with an 8B model, we're getting up to 500K or maybe 600K of context. Anything past that, we've got to get beefier GPUs, and we've got to get more of them. So for any kind of where-the-rubber-meets-the-road system, you've got to think about ROI. Maybe you don't want to throw the whole internet, or Harry Potter five times over, into the context; maybe some of it you want in the RAG and some of it you want in the long context.

Okay. So long context, in spirit, is kind of moving towards this idea of a perfect retrieval system. Wiz and I were chatting the other day about how, whether you're using semantic chunking, or a hybrid retrieval mechanism, or re-ranking, you're trying to figure out what is most relevant in the context. And when we do the assessment with tools like RAG assessment, we're saying we don't want very duplicative information, stuff that's just copy-paste of the same thing, but we do want rank-ordered relevant information. We talk about the problem RAG is solving as being about fact-checkability. Leo, do you think that's the same problem long context is solving, or can we state it a little differently? Are we moving in the same direction with these two tools?

Yeah, I think it's definitely moving in the same direction. Fact-checkable is such an interesting term, because if you think about all of the data that a language model is trained on, it's trained on facts, right? It's trained on information that's out there. But with hallucination, the model is maybe extrapolating, or it's connecting pieces of information that maybe it shouldn't be connecting. And all of this learning is trying to teach the model both about new facts and about how it maybe shouldn't connect pieces of information it already has. So fact checking, I think you can totally do with long context, because the facts are in the context.
And I think there's been plenty of research showing that, as you mentioned, in-context learning is way more sample efficient than trying to pre-train or fine-tune the model. So that can be your last-step fact checking, and it's very sample efficient, similar to RAG in that way. One of the things I like about putting more and more into the language model is this principle of just getting out of the way of the model and letting it learn. There's something beautiful about the fact that you have a very sophisticated AI model, you feed it examples and information, and it's able to synthesize how it retrieves information or how it ranks pieces of data, versus needing to code up the algorithm by hand. Let the model figure it out.

So one of the things we did before is we looked at RAG versus fine-tuning, and the answer was that you need both to get to a production-grade LLM application. Is it similar here with RAG and long context? What's at the end of this tunnel: is this an illusion of choice, or is it real? Wiz, give me your take on this, and then we'll go to Leo before we wrap it up. RAG versus long context is just so tough to grok, this question of when to use either, and I think it's the new hottest question out there.

Yeah. For me, this question is pretty easy: it's both. There are times when we're going to be able to use RAG and long context together to make sure we're getting everything we need, or up to everything we need, especially if we're thinking about API-based solutions where we're not actually needing to host those long context models ourselves, so the flexibility of the context window is almost explicitly free for us: we're only paying if we use it. And especially if we're thinking about RAG across very specific sets of documents that don't change much, or don't change frequently, we can exploit the fact that we can hold this long context in memory, so we don't have to do repeated loads and unloads, and we wind up with less of a latency hit. The idea is that at some point, especially as we continue to discover more about efficient long context, we're going to be able to do both. And if one is 2% better on one task and the other is 2% better on the other task, and there's not a serious cost to using both, then we should probably just use both. It'll be better than either individually.

Yeah, okay, because you can use caching with both, you can use these cost- and memory-saving approaches with both. Are they both just becoming table stakes here, Leo? Is that fair to say? How would you like to wrap this up for the audience? Yeah, I think that's actually a pretty deep statement, the fact that they're not that dissimilar at the end of the day. And yes, definitely use both.
And my suspicion is that as we go forward, and again we're right at the frontier of all of this stuff, RAG and long context will just be parts of a spectrum. The one piece of evidence I have in my mind, and I don't mean to sound like a broken record about thinking in terms of people and humans, is that the way our minds work is that there's some information we have packed away somewhere back in our brains that we reference and look up, and there's some information that we have at the forefront. Like, I was reviewing all the work we did on long context before this talk, and that's in my current "long context." We do both, so I think it stands to reason that AIs are going to do both just the same. Yeah. Studying for the exam, that's the long context, and the TED-talk nuggets that entered your brain and solidified and crystallized, maybe that's the RAG. Exactly. Okay, I'm going to give a little more context here before the next discussion. Thank you so much, guys; we'll have you back in just a moment.

So I want to talk about the context window now, because this is an important point that I think goes misunderstood and understated. When we think about the problem we're solving with long context, we're talking about the fact that the model has only learned correlations across sequences of whatever length it was trained on. The Harry Potter books keep coming up: the model doesn't necessarily know things across the whole series, because it's never seen a single input sequence of that length. And this is where we get into the distinction between context window and input sequence length. Let's look at some of the latest models. For Llama 3, Meta said they "trained the models on sequences of 8,000 tokens." Trained the models; they didn't say the context window was that length. Whereas for GPT-4 Omni, OpenAI says the context window is 128K. The thing I want to tell all of you today is that there really is no separate "in" and "out" when it comes to the context window; there is only the context window. I was looking through the OpenAI forums, and there was a great upvoted question asking what exactly the difference is for input. The answer: the context window includes all input, output, and control tokens. This is very important, because the context window should be thought of as all the stuff that goes in plus all the stuff that can be held as it comes out.

This is kind of a difficult thing to conceptualize, so I want to bring our experts back up to the stage to talk about how they think about the context window. Often, guys, we see that there's a context window length you can put into an LLM and everybody thinks that's the end of the story, but actually it's both the input and the output that need to be considered. So how do you think about the context window, and about balancing your understanding of input, i.e. max input sequence length, with the context window in general? Let's kick it off with you, Leo.
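To make "the context window includes all input, output, and control tokens" concrete, here is a small accounting sketch with tiktoken. The o200k_base encoding is the one tiktoken ships for GPT-4o; the few-tokens-per-message control overhead is a rough rule of thumb, and the truncation at the end is just one naive way to keep input plus expected output inside a single window.

```python
import tiktoken

enc = tiktoken.get_encoding("o200k_base")   # GPT-4o's tokenizer as exposed by tiktoken

CONTEXT_WINDOW = 128_000    # the advertised window covers input + output + control tokens
MAX_NEW_TOKENS = 4_096      # how much of the window we want to reserve for the answer

system_prompt = "You are a helpful assistant."
user_prompt = "Summarize the Harry Potter series in detail."

input_tokens = len(enc.encode(system_prompt)) + len(enc.encode(user_prompt))
control_overhead = 4 * 2    # rough allowance for the control tokens wrapping each chat message

used = input_tokens + control_overhead
room_for_output = CONTEXT_WINDOW - used
print(f"{used} tokens go in; {room_for_output} tokens of window remain for the output")

# If the prompt were too big, something has to give. A naive fix: drop the oldest
# user tokens so the input and the expected output still fit in the one shared window.
if room_for_output < MAX_NEW_TOKENS:
    budget = CONTEXT_WINDOW - MAX_NEW_TOKENS - control_overhead - len(enc.encode(system_prompt))
    user_prompt = enc.decode(enc.encode(user_prompt)[-budget:])
```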
Yeah, on a slightly more technical side of things, you're absolutely right: the context window is everything. It's the system prompt, it's what you put in, and it's what you get out. The reason that's true is exactly what makes these AI models so powerful: in its memory for figuring out the next token, the next word, the model can reference everything it has seen before. So when it gets to its output, in order to tell you the right thing, it needs to know about everything that you've inputted, and, since you mentioned OpenAI, also everything OpenAI put in the system prompt. That's exactly why, technically speaking, the context window is everything: when it gets to the very last word of the output, it needs to know everything it has written before, everything you gave it, and the system prompt. Keep going, you're the scientist here; we'd love to hear more of your perspective. Oh, I was rapidly reaching the end of what I wanted to say. I think you all probably have more to say about the relationship between input and output sizes.

Yeah, so if we're thinking of this predicting-the-next-token idea, and everybody says, oh, these GPT models, they're just next word predictors, then Wiz, can you cast this into an input and output picture for us? As we predict the next token, how many next tokens can we print out before we run into the issue of "I've now filled up my context window, I need a longer context window"? How should we think about this?

Yeah, the idea is that it's like a tank that can fill up. If you add more tokens to the context window by, say, generating them, and then you look at all the tokens you've had as input, as your system prompt, and that you've generated, eventually you'll have generated enough tokens that you've used the whole context window. Something's got to give at that point; the models are not built to be hyper-flexible in this sense. So we're going to have to start dropping context from, say, the beginning of our input. We can slide it along and continue for as long as we'd like, but if you have, say, a context window of 100, you give it an input of 50 tokens, and it generates 100 tokens, then when it's generating that 100th token, it's only looking at half of what you gave it. So this is the idea of things falling out of context. Now, I want to be clear, because there are literal hard-coded API limits for a lot of the services you're going to use, whether you're serving models yourself or interacting with API models. The API or the application layer is going to put limits on how much stuff you can send it and how much stuff it can generate. So you're going to see things like maximum input and maximum output, but those are artificial limits. I would say arbitrary, but that feels a little crude; they're well-reasoned limits imposed by people. It's nothing to do with the actual nature of the model. It's just that, typically, we don't want models to generate for a very long time, especially if your business is paying for the compute.
You don't want the model to generate 700,000 tokens, because you're paying for it, and it's probably going to impact latency for other users. And then you also can't let people put in too much, because you know you're going to generate up to X number of tokens. So you can't let people put in literally the entire context window, or you're immediately going to lose context, and that's assuming you're even handling it gracefully, which some serving platforms don't do.

Okay, so a quick follow-up on that. If I'm generating tokens and I'm running out of context window, does that mean I'm losing the system prompt? Or can I be more flexible about the way I'm doing this? Is this the way most of these systems work? I think most of them just don't handle it, to be honest with you. If you look at the popular serving platforms, they just don't let you do that: when the context window, quote-unquote, gets too full, they'll simply stop working. And in fact, most of them are designed so that can't happen under reasonable circumstances; they limit the number of input tokens and they limit the number of potential output tokens so we never reach that point. Of course, with the brilliant engineering teams that we have, you could certainly design the system so it preferentially drops input tokens rather than the system prompt. But at that point, what are we doing? Sure, sure. Okay, so then, Leo, these long context windows: can they actually slide and take more input or more output? Is that right?

Yeah, I actually think it might be a little hard to say exactly what would happen if you give the model more context than it was trained on. I don't think it's necessarily clear, and this goes back to what a model knows and what it doesn't know. Imagine if you yourself had only ever read 100-page books, and someone gave you a thousand-page book and said, hey, go figure it out. You might do something reasonable, but you also might just start doing something completely unhinged. So, trying to put my product hat or engineering hat on, imposing a clear restriction of, hey, let's keep the context within the bounds of what the model knows about, is a clearer or more expected experience than "go ham, and God help you with what the model is going to say." That's right. So if the whole point is to be fact-checkable and avoid hallucinations, we probably don't want to get into that mess. Right.

And just a point of clarification here, Leo: is it true that we put all of the tokens that are currently in the context window, that is, all input and all output, through the model each time to predict the next token? Sort of running everything back through each time? Yeah. There are caching things you can get into, where you don't have to redo all of the computation from scratch, but yeah, exactly: you generate the next token by giving the model everything it has seen before. Okay. And join us next week; we're talking next token prediction and exactly how these predictions are made.
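That "run everything back through each time, unless you cache" point can be seen in a few lines with Hugging Face transformers. This is only a sketch: GPT-2 is used purely as a small stand-in model, and the loop shows greedy decoding with past_key_values acting as the KV cache Leo mentions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("Long context windows are", return_tensors="pt").input_ids
past = None  # the KV cache: keys/values for every token seen so far

with torch.no_grad():
    for _ in range(20):
        if past is None:
            out = model(ids, use_cache=True)                                # first step: the full prompt
        else:
            out = model(ids[:, -1:], past_key_values=past, use_cache=True)  # later steps: only the newest token
        past = out.past_key_values
        next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)            # greedy next-token prediction
        ids = torch.cat([ids, next_id], dim=-1)                             # everything generated stays in the window

print(tok.decode(ids[0]))
```

Without the cache, every step would re-run attention over the entire input-plus-output sequence, which is exactly why the context window has to count both.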
So I want to go into training now, and talk about how you guys actually created this long context setup. Leo, I'd love for you to talk a little bit about the training process. Technically, I know you had to get access to all this compute, but how should we even think about what you did? You took this massive model off the shelf, and then, is this more of an unsupervised pre-training thing, more of a supervised fine-tuning thing, or somewhere in between, in this continued pre-training space?

Yeah, great question. In training a long context model, there are basically two main challenges. One is the pure computational challenge. You may have heard that attention has quadratic complexity in the context length; that's exactly because every token needs to be able to reference every other token, and things that are quadratic don't necessarily scale well, so you very quickly start running out of memory. These things get huge. The other side, which we already touched on, is just the fact that the model has never had to deal with tokens that are, say, a million positions apart. So there's some amount of instruction, of training: you need to give it examples for it to be able to figure that out. Those are the two things we had to work through, and we can get into them in more detail.

The other thing you asked about was unsupervised pre-training versus fine-tuning, and it's a bit of both. One of the things we found works pretty well is to do unsupervised pre-training on long contexts. Typically, pre-training works by giving the model some text, whether it's code or a book or what have you, and having it use the previous part of the text to predict the next token. You do the same thing for long context; that's a very straightforward thing to do. One of the issues with only doing that is that it becomes kind of a single-purpose model: all it knows is how to give you the next part of whatever you hand it. So if you want it to then be able to chat with you, for example, you've got to layer other stuff on top of it. And that's what we did: we first did the continued pre-training, and then we layered on what's called instruction tuning, where you teach it how to chat. That's right. So did you take the base model off the shelf, or did you take the instruction... oh, no, sorry, take that back: we took the instruction-tuned model. I preempted the question. Okay, and that's in line with what our viewers have heard from us: take the instruction-tuned model off the shelf when possible. Why not? It knows more stuff, and it can do more things.
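A back-of-envelope calculation shows why "quadratic in the context length" bites: naively materializing the attention score matrix at a million tokens is hopeless, which is why long-context training leans on memory-efficient kernels such as FlashAttention and on sharding the sequence across GPUs (ring attention), as in the EasyContext work mentioned earlier. The head and layer counts below are assumptions for an 8B-class model, not Gradient's exact configuration.

```python
n = 1_000_000        # context length in tokens
heads = 32           # assumed number of attention heads
layers = 32          # assumed number of transformer layers
bytes_per_score = 2  # bf16

naive_scores = n * n * heads * layers * bytes_per_score
print(f"{naive_scores / 1e15:.1f} PB of attention scores if materialized naively")
# ~2.0 PB, versus a few hundred GB of GPU memory per node: hence fused attention
# kernels that never build the full matrix, plus sequence parallelism across GPUs.
```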
Can you give us a sense, too, of the kind of problem you're solving when you really need this long context thing? "Tell us Harry's entire life story," I can imagine, right? That's the Harry Potter example, but can you give us some sense of what these datasets look like that you're using, or of the problem spaces that enterprise is thinking about with these longer context windows?

Yeah, they're definitely related. On the dataset side, one of the things we ran into is that there just aren't that many pre-existing long contexts out there. At one point we were scouring Project Gutenberg to see what the longest books out there are, and you've got War and Peace, which I think clocks in at 700-something K, and you've got Les Misérables at around 600K, and there are only a few of them. So pretty quickly you have to get into synthetic data generation, and we can chat about that too. On the use case side, there are a few that we've thought of. Maybe one of the more compelling ones is coding. Right now, coding assistants know pretty locally around the code that you're writing, and maybe slightly more than that, so they're pretty good at giving you the next part of the line you're writing in Python or whatever. But now imagine all of a sudden you can throw your entire code base into it. Now you get much closer to this idea of describing the feature you want, maybe giving it enough detail, and having it pull in all the stuff that it needs. So that's one use case; there are a couple.

Yeah. Wiz, what do you think about that? Do you think we've got the AI programmer coming soon? That is a great use case, because I can't really think of many things that are that long, and the entire code base of something is potentially massive. Anything to add on this, Wiz? No, I think it's well covered. I mean, the AI programmer always feels like it's six weeks away; shout out to Alan and Kevin, by the way, we got that on our YouTube channel as of yesterday. It'll still be six weeks away six weeks from now, but it is getting better, right? And when it comes to something like long context, and this idea of basically using it as a glorified cache: an entire code base is a great thing to have in cache if your company has a very large code base. It becomes worthwhile to answer really complex questions that require that level of context, and to answer them quickly. So absolutely, I can see it being a super valid use case.

Okay, very cool, guys. Well, we're going to go ahead and start the last portion of our presentation today, where we talk about eval. We have prepared a short demo, because we don't want to make it all panel discussion, to give you some more context on how people are evaluating these LLMs today. A lot of the time you'll see a plot like this one, which is from Gradient's post on the 8 billion parameter, 1 million context length model. This is called needle in a haystack: on the y-axis you have depth percent, that is, how far into the context window we place, and then look for, the needle in our haystack.
Then, against that depth percent, we have the context length, the number of tokens, so we're asking: how far into context windows of different sizes are we going to look? We want this plot to be all green; green means the model found the needle, and red down here means it missed it. So we put this little nugget, this little needle, into the haystack; the famous example is the "magic number" style fact. We place it somewhere in the context, and we see if the LLM can find it. So what we did today is prepare a short demo where we actually run this needle in a haystack approach on GPT-4 Omni, the 128,000 context length model that just came out last week. Wiz is going to present this to give you a little insight into how it's done, and then we'll come back and talk about exactly how Gradient did this with the models they released, talk about the future of evaluation, and wrap up with more Q&A from the Slido. Over to you.

Oh yeah, okay. So this is pretty straightforward, thanks to some existing work by Greg. The basic idea of needle in a haystack is very simple: we fill up the context window with stuff, and then we ask a very specific question about something we've added to that stuff. First, some dependencies: we need the needle-in-a-haystack package and langchain-community, and we're going to clone the repo that has this library, because we also want, sorry, I will share the link to the notebook, but the idea is that we grab the repo so that we have all of Paul Graham's essays. That's the classic setup: Paul Graham's essays are the haystack, the stuff we fill our context window with. We're going to use nest_asyncio's apply so that we can run the async loop in Colab; just necessary plumbing. We're going to use OpenAI as both the model we're testing and the evaluator. In order to handle situations where the model answers with the correct response but slightly different wording, we use an LLM to determine whether or not the response contained the correct answer; that's why we're using an LLM as the evaluator here. There are a bunch of different parameters, but the basic idea is straightforward: we set everything up and then we just run it. It does take a while, because we're testing a lot, so in this case we're only going to test a total of 10 context-length intervals and 5 depth percents.

So let's talk about what these parameters are doing. Model to test: straightforward, we're going to use OpenAI and test GPT-4o. Our evaluator is also going to be GPT-4o. It doesn't matter that these are the same, because they're not doing the same thing: we're just using the evaluator model to see if the response contained the correct answer, that is, the needle. The true answer is "Flaming Hot Cheetos." The question asked is, "What specific food does Jerry like?" This is an homage to Jerry from LlamaIndex, who used this prompt when doing some needle-in-a-haystack testing a little while ago.
And then our needle is "Jerry likes Flaming Hot Cheetos." The needle is just some text, and the idea of needle in a haystack is that we fill the context up to some amount with Paul Graham's essays and then place that phrase somewhere in the context. The important piece is that we can place it at various depths: zero percent depth is the start of the sequence, 100% depth is the end of the sequence, and we can place it at various intervals from beginning to end. The point is to see whether the model has blind spots, potentially in the middle, towards the end, or towards the beginning; this is about seeing where this retrieval task is successful. We can also set up the context lengths, so we go from 100 tokens, which is only 100 tokens of Paul Graham's essays plus our little phrase, all the way up to 118,000 tokens. 128K is the maximum for GPT-4o, but we chose 118K to give it some wiggle room, as the library sometimes overestimates and overshoots this number. We split that range into 10 intervals, and then we place the needle at roughly every 20% of depth as we do the evaluation on that input sequence. Easy peasy. So the whole idea is: fill up some amount of context, depending on which interval we're on, with Paul Graham essays, plop the needle in at the corresponding depth, and then see if the model can answer the question and produce the expected response.

So we run this. It runs for a while; it took about five minutes. And it can cost money, because we're hitting an API endpoint; luckily GPT-4o is cheap, so it's not costing too much. Then we build the results table: we create a data frame, convert it to a pivot table, and use Plotly to plot it. You'll notice there are some blank spaces; this is just where the library failed to retrieve a response, so those are NaNs and they show up as clear. That's the library's fault, not the model's. But this is it. How to read this chart: at 80K, at each of the tested depths from zero to 100, the model always answered the question successfully in each of those trials. It didn't matter where we placed the needle in the 80K context, it always found it, and the same holds all the way up to the maximum, which is 113K, the actual value the library used; we are able to retrieve, or fetch, the needle in all cases, all the way up to 118K. What this means is that up to its maximum context length, we are always able to successfully answer the question. We'll just scroll up to double-check: the question is "What specific food does Jerry like?", and we always get the response "Jerry likes Flaming Hot Cheetos," or a response close to it, something that says Jerry enjoys Flaming Hot Cheetos. So on this, albeit very simple, task, the model is always able to retrieve the correct sequence that we injected into those Paul Graham essays. Now, I do want to talk about something very quickly, which is that this test is awesome, but it's only testing for that retrieval task.
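For readers who want to see the moving parts rather than the library, here is a stripped-down version of the same loop: build a haystack from essay text, bury the needle at a given depth, ask the question, and let a second model judge the answer. It does not reproduce the needle-in-a-haystack package's API; the file name, the prompts, and the grid values are illustrative assumptions.

```python
from openai import OpenAI
import tiktoken

client = OpenAI()
enc = tiktoken.get_encoding("o200k_base")

NEEDLE = "Jerry likes Flaming Hot Cheetos."
QUESTION = "What specific food does Jerry like?"
HAYSTACK_TEXT = open("paul_graham_essays.txt").read()   # assumed local concatenation of the essays

def build_context(context_len: int, depth_pct: float) -> str:
    hay = enc.encode(HAYSTACK_TEXT)[:context_len]
    cut = int(len(hay) * depth_pct / 100)                # 0% = start of the sequence, 100% = end
    return enc.decode(hay[:cut] + enc.encode(" " + NEEDLE + " ") + hay[cut:])

def found_needle(context: str) -> bool:
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"{context}\n\n{QUESTION} Answer only from the text above."}],
    ).choices[0].message.content
    verdict = client.chat.completions.create(            # LLM-as-judge, as in the demo
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": ("Reference answer: Flaming Hot Cheetos.\n"
                               f"Candidate answer: {answer}\n"
                               "Does the candidate contain the reference? Reply yes or no.")}],
    ).choices[0].message.content
    return verdict.strip().lower().startswith("yes")

for context_len in (1_000, 10_000, 100_000):             # a small grid; the demo used 10 lengths x 5 depths
    for depth in (0, 25, 50, 75, 100):
        ok = found_needle(build_context(context_len, depth))
        print(context_len, depth, "PASS" if ok else "FAIL")
```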
So we're going to go back to Greg and hear a little bit more about other ways we can evaluate, and about what this truly means for evaluation. I'll send it back to Greg. Yeah, Flamin' Hot Cheetos are fire; very cool that Jerry's a fan too. So the needle in a haystack is all about putting something in there and trying to find it: how far in are we able to find it? Then we can expand this idea. With multi-needle in a haystack, and this is from a blog that LangChain put out, we can look at more needles and longer contexts. The number of needles is the number of unique facts, and for context length, the smaller one is green and the larger one is red here. And we see decreasing performance: with longer contexts and more needles, we're able to retrieve fewer of them. Makes sense. We can extend this even further. There was a paper put out called RULER, and this was actually from NVIDIA. The RULER paper combined the retrieval aspects, the multi-needle-in-a-haystack retrieval, with a variable tracking approach aimed at multi-hop tracing. What we're trying to do there is look at multi-hop connections, a step beyond "hey, did we find this one thing?" And we can also look collectively and do some aggregation, looking at common or really frequent words. So through the variable tracking and the common and frequent word tasks, we're able to look a little bit more at the long-range behavior. Together, these constitute the RULER approach. There's a paper link in the slides; it's not popping up right here, but it'll be available post-session.

I want to bring our guests back up and kick off the Q&A for today; we're going to stick around for a couple of extra minutes to answer some of your questions. But when we think about eval, and Leo, I'll start with you on this, the needle in the haystack test points towards this kind of perfect retrieval system, at least. But if the motivation is to solve the problem of very long-range correlations across a big context, then that's really a little bit different from the perfect retrieval we talked about earlier. Are we evaluating both of these things well today? And if not, what's missing?

Yeah, I think the discussion we've been having perfectly sets this up, because when you were both describing needle in a haystack, I was thinking about it, and this is exactly a test that RAG should just completely knock out of the park; it's literally what RAG is designed to do. So I think it's a very important primitive for long context: it's got to get this right if it has any hope of doing anything more sophisticated, but it is very much the first step, being able to grab information in the history and replicate a pattern. I think this is similar to tests folks used to do on language models a couple of years ago, these induction head tests.
I don't remember the name of the research paper, but it basically showed a causal link between doing well on needle-in-a-haystack-like tasks and then being able to build up to in-context learning. So it's an important first step, but exactly as you put it, for perfect retrieval and all the stuff we've been talking about, it's not just pulling out one fact; it's being able to pull out multiple interrelated facts and reason about them. I really like that you talked about RULER, because I think it's 100% a step in the right direction, especially on some of those later tasks you mentioned, like variable tracking. You have, somewhere in the haystack, that x1 equals 10, somewhere else that x2 equals x1, and somewhere else that x3 equals x2, so again, the model has to reason across these pieces of information, and the same goes for the other tasks too. So yes on RULER, and also yes on bridging that continuum between the very easy things and, as you mentioned, perfect retrieval, or wherever we're trying to head with these real-world use cases.
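Picking up Leo's x1/x2/x3 example, here is an illustrative way to construct a RULER-style variable-tracking probe. It is not the official RULER generator, just the shape of the task: a chain of assignments scattered through filler text, where answering requires hopping across all of them.

```python
import random

def variable_tracking_probe(filler: list[str], hops: int = 3, value: int = 10):
    # Build the chain: x1 = 10, x2 = x1, ..., x_hops = x_{hops-1}
    facts = [f"x1 = {value}."] + [f"x{i + 1} = x{i}." for i in range(1, hops)]
    lines = list(filler)
    for fact in facts:                                    # bury each fact at a random position
        lines.insert(random.randrange(len(lines) + 1), fact)
    question = f"What is the value of x{hops}?"
    return "\n".join(lines), question, str(value)

haystack, question, expected = variable_tracking_probe(
    ["The quick brown fox jumps over the lazy dog."] * 200, hops=3)
print(question, "-> expected:", expected)                 # a model reading the haystack should answer 10
```

Scoring works the same way as the single-needle case: put the haystack in the context, ask the question, and check whether the answer resolves the whole chain.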
Yeah. So can we expect a paper on evaluation from you guys anytime soon? What's going on inside Gradient related to this? Are you coming up with custom things, or are you forced to confront this issue by enterprise? How is this manifesting for you specifically? Yeah, totally. I've actually been talking to the RULER folks at NVIDIA very recently; we've been talking about collaborating on a few things, maybe slightly tangential. One of the interesting things that comes up is this distinction between the prompt and the task. Not every model expects prompts to be written the same way. There's actually some really interesting work on, for example, Llama 3 and how it's actually better to have the instruction at the end of the prompt instead of before the example, which is the reverse of what GPT expects. Anyway, if you want to be testing a model's performance on long context, maybe you should be optimizing the prompt a little bit for the model. So we've been talking about that kind of thing, to give a slightly clearer or more nuanced picture of how to compare all of these different models. But for us internally, again, what we're super excited about is actually using these models in the real world, and that's the direction we've been going. For learning how to improve the models, evals are great, and for comparing models, evals are great. But I always say, hey, let's iterate quickly: let's throw a code base into this thing, let's throw all of Shakespeare's works into the context and see if it can write more poetry. That's the direction we've been going in. That's right. Shove everything in, and then see whether the person who does that job today says "yeah, it did a good job on that" or not. Very cool.

Okay, so we'll start the Q&A now. In conclusion: RAG and long context, both necessary; we're talking about long range, and also about retrieval. We heard that four to five million is where we hit diminishing returns, so don't expect to see 10 million from them anytime soon, but we are expecting to see great things coming out of Gradient, and eval is definitely still emerging. So please upvote your favorite questions. In the spirit of building, I wanted to kick off with one of the questions I saw from anonymous: "My app works-ish. Why should I consider an LLM with a long context? Won't this cost more and take longer?" Leo, what do you think about that? I mean, I think it probably depends on the use case. If you have an app that works-ish with short context, maybe you don't need to add in superfluous or unnecessary context and pay for it. I do think there are other cases where adding in a bunch of context is helpful. So it's probably fairly task dependent at this point. Maybe in the future, once these things get more efficient, once there's more engineering built on top of them, and once the model itself is able to discern a little bit better how much context to use, then maybe the answer is just use it for everything. But for now, I think it's a little bit on the user to decide where to employ this. Yeah, absolutely, and I think this gets back to the performance versus cost issue: if you're not going to increase performance that much, maybe you shouldn't spend more. Let's do stuff that makes sense from an application standpoint.

So, a technical question here, Leo. With attention scaling quadratically with context length, for one million tokens of context, are dense attention matrices still used, or is there some different attention setup that you're using? Great question. The way that we trained our models, it does do the full quadratic attention. This is as granular as you can get: the model can directly compare every token across the full context window. And the trade-off, since we were talking about cost versus benefit, and this is, I think, the really exciting piece of work that we're now starting to look into, is that maybe that's not entirely necessary for all tasks. Maybe the context that's pretty far back can be compressed a little bit more, and at that point you start getting closer to the realm of efficiency of the short context models. Where this comes up is maybe a little bit less in training, because, sure, there are races, so to speak, to get the first whatever long context model out, but at the end of the day you're doing training offline. Where it really comes into practice is serving the model. You probably don't want to give the model a question and then have to go take a 10-minute coffee break every time you do. So for these actual serving use cases, looking at ways to compress information further back in time, which is maybe where this question is leading, is pretty important. Hmm, yeah. Wiz, this is something we've been talking about a lot recently, right?
And a lot of people are paying attention to this, with Groq coming on the scene and all this other stuff happening. There's training and then there's inference, and they're different. So how should we be thinking about this, Wiz, in terms of long context versus RAG, and training versus inference? It seems like we've covered a lot of the questions in the chat today, and maybe this is a nice one to end on. Give us your thoughts, and then, Leo, you can close us out. Training versus inference, long context versus RAG: how do we navigate this decision space, thinking about compute and leveraging these tools?

Yeah. The way I understand what you're saying, I think the big thing to look at is going to be inference. For training compute, unless you're someone like Gradient, whose job is serving awesome models to do awesome things, I don't know that you're going to want to spend heavily in the training realm. You'll want to do fine-tuning, for sure, and you'll want to leverage PEFT methods in order to do that cost-efficiently, but otherwise I think you're not going to have a ton of training budget. When it comes to inference, though, inference is a space that is moving very quickly. The folks at vLLM, the folks at NVIDIA, even the folks at AMD, everyone right now, and of course Groq and these kinds of bespoke LLM chips, or whatever they're going to call them; and I'm just going to not mention Google's TPUs, I guess. The idea is that there are so many different platforms doing so much great work right now that I think we're spoiled for choice, and it doesn't take much to get a 10% improvement in inference latency. Versus this RAG-and-context-window question, which I think is a much more nuanced conversation, where we're not seeing those kinds of huge jumps in performance. We don't even know how to evaluate base, non-long-context models well yet; that's still not a solved space. We have ways we can kind of do it, and we have golden datasets and that kind of thing to help us, but in reality we don't even have the tools to evaluate smaller context models well. So the idea that we're then going to be able to evaluate a longer context model super well becomes, like Leo said, a step in the right direction with RULER: we're always marching forward, we haven't marched back yet, but we don't really know how to evaluate them yet. And with RAG, it's the same problem, but now we have six moving parts. So that's a very nuanced conversation. But the way I think about it is: start with RAG, and start with whatever inference optimization solutions your current platform has. AWS, GCP, Azure, whatever system you're working through, they're going to have something you can use to run the model fast. And if you find yourself running up against the wall where retrieval is just not enough, and you have a pile of context that you need to be referencing, I think long context becomes worth the cost very quickly. But anyway, I'll stop there.

Yeah. And I just want to end with this, since that was sort of a messy question. To put a fine point on it, Leo: you guys released like six models. How should we pick which one to use?
And how fast are they at inference? Do we have to wait five minutes unless we get a bunch of GPUs for the million-context model? What are most people watching going to pick up, and what are most enterprises going to pick up? Yeah, really good questions. I've been playing around with it recently for some demos. On an eight-GPU L40S cluster, the 8B model is doing about 600K context length with a 10-to-20-second, maybe closer to 20-second, response time. Take that as you will; that's something that's pretty easy to spin up just using vLLM currently, so you don't need a ton of additional coding or magic on top of it. For faster and more efficient inference, I'm going to say stay tuned. We're definitely working on making these models a lot cheaper and easier to run inference on, and I know other people are working on that as well. And I think it's a really interesting point, thinking about where to use RAG, long context, pre-training, and fine-tuning. The thing I'll add on top of that is putting examples in context: in-context learning is way more sample-efficient than fine-tuning. So as long context gets a little more developed, a little more baked in and efficient, you get this really cool paradigm: throw a bunch of examples into the RAG store, pick out even more of them than you could before into the long context, and use that instead of fine-tuning. To me that feels much easier to work with than going through the whole fine-tuning pipeline, and that's personally what I'm pretty excited about as far as upcoming use cases. Retrieve your way into fine-tuning through the context window. Something like that. Okay, I very much like that. All right, guys, well, epic discussion. Thanks for sticking around to answer extra questions today, Wiz, Leo. Really appreciate you guys. Great job. It's time to wrap up today. Thanks for joining us. If you liked this event, let us know. We are interested in getting more awesome speakers and brilliant scientists and engineers like Leo building, shipping, and sharing with us. Of course, like and subscribe. We will be back on YouTube next week talking about the details of next-token prediction. It will really connect this big zoomed-out view we had today with a very zoomed-in, in-depth view. If you haven't joined the AIM community yet on Discord, jump in and start building, shipping, and sharing with us. We don't bite, and neither does anybody else in the community. If you have questions, or you run into issues or error codes, go ahead and jump in and ask; there are people around to help. And if you're interested in joining our very next cohort that kicks off next week on Tuesday, it's AI Engineering Cohort 3. We just had an absolute banger of a Cohort 2 demo day last week; check out those videos on YouTube. We still have seats left for that cohort, so check it out and get your application started. Even if you don't actually enroll, you'll learn a lot just going through the application process.
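For reference, a minimal offline-serving sketch with vLLM along the lines Leo describes. The model id, context length, and GPU count are assumptions for illustration; check the actual model card and your hardware limits before reusing them.

```python
# Minimal vLLM serving sketch. Assumptions: the HF repo id, max_model_len, and
# tensor_parallel_size are illustrative -- verify against the model card and GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="gradientai/Llama-3-8B-Instruct-Gradient-1048k",  # assumed repo id
    max_model_len=600_000,        # roughly the 600K context mentioned above
    tensor_parallel_size=8,       # e.g. 8x L40S
    gpu_memory_utilization=0.95,
)

params = SamplingParams(temperature=0.2, max_tokens=512)
outputs = llm.generate(["<your very long prompt here>"], params)
print(outputs[0].outputs[0].text)
```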
But that's a wrap for today, everybody. Thanks for sticking around late with us. Please share any feedback that you have about our event in the feedback forms that you'll get post-event. And as always, until next time, keep building, shipping, and sharing, and we, as well as Gradient, will certainly keep doing the same. Thanks, everybody. Have a great rest of your week. We'll see you soon.
Long Context Windows: Extending Llama 3
4351
AI Makerspace
20240523
Join us live to discover how Gradient AI is pushing the boundaries of LLM technology with groundbreaking long-context versions of Llama 3! We'll explore how Gradient's small team outpaced Meta by releasing Llama 3 models with unprecedented context windows, reaching up to 4 million tokens. Learn the technical intricacies of expanding context window sizes from 8B to 70B parameters, the compute requirements, and the challenges faced. We'll delve into popular benchmarks like Greg Kamradt’s Needle in a Haystack and discuss with Gradient's Chief Scientist, Leo Pekelis, the future of RAG versus long-context LLMs. Don't miss this chance to understand the cutting-edge advancements in AI and get your questions answered live. Event page: https://lu.ma/longllama Have a question for a speaker? Drop them here: https://app.sli.do/event/9kSLiGTxM1CzkJKmpk3VDS Speakers: Leonid Pekelis, Chief Scientist at Gradient https://www.linkedin.com/in/leonid-pekelis/ Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 For team leaders, check out! https://aimakerspace.io/gen-ai-upskilling-for-teams/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA How'd we do? Share your feedback and suggestions for future events. https://forms.gle/dRMWrwHM9kjGc4A97
2024-06-10T01:43:59.013198
https://www.youtube.com/watch?v=ulTvNAXI_1E&ab_channel=AIMakerspace
Hey, Wiz. Hey Wiz, so agents, they're pretty dope and we've explored them before. Does that mean multi-agents are even more dope? Yeah, Greg, I think it does mean that. They're multi-dope. We've reached the next level of dopeness so you're saying that we can build something dope today and then use multi-agents to even up the dopeness technically speaking all of that is true yes okay so we're gonna technically increase the dopeness of agents. Wiz, I cannot wait to see what you've got in store for us. We'll see you back in a bit, my man. Welcome everybody. I'm Dr. Gregg. That's the Wiz. We're talking multi-agent rag today. We're talking multi-agents today, multi-agent frameworks. We're going to root this discussion in the patterns of GenAI that you need to know if you're going to build LLM applications. There's a lot of complexity to untangle, and we've got a short time to spend together today. So if you've got questions along the way, please drop them in the YouTube live chat or in the Slido link that we're dropping below. We will prioritize Slido at the end. Let's go ahead and get right into it today. We're talking multi-agents. We're talking multi-agent RAG. We're going to be using Lang Chain and Lang Graft to do our build today. So there are some things we want to make sure that we get a handle on today. And as we align ourselves to what we'll get out of this session, we really want to sort of get under and understand multi-agent workflows as an LLM prototyping pattern. Of course, we want to build one and we are going to walk you through exactly how to build a multi-agent application. That's quite complex today. So to get into this, we want to sort of root ourselves in the patterns that we're all so familiar with if you're joining us for multi-agent rag today the patterns of spongebob here and then we want to extend it these are so important because they don't go anywhere just because we add a bunch of agents to our applications let's take a closer look Let's take a closer look. When we talk about the patterns, we have to start with prompting. The definition of prompting is to lead, to do something, to instruct. Done well, we might call this teaching or training even. If we take teaching and training far enough into an LLM, we provide one-shot, two-shot, few-shot examples, we run out of context window, where are we left? We're left with fine-tuning as a method to teach the LLM how to act on the other side of the task-specific spectrum. Of course, optimizing how the LLM acts is one part of the puzzle. We also want to optimize what we're putting into the context window. We want to use as much relevant reference material and knowledge that we can get our hands on. We want our applications to incorporate context well. context well. And of course, RAG is the focal point of so many of our applications, especially the ones that we're actually trying to use to create business value today. When we talk about agents, what we're typically talking about today is we're talking about giving the LLM access to tools of various kinds, but not incredibly various. There's sort of a category, a main category of tools that starts to connect some of our patterns back together again. But let's think about this fact that agents are a pattern. What pattern are they? Well, simply put, they are the reasoning action or the react pattern. And the way we want to think about this is we want to consider a typical simple agent loop. We ask a question. The question is routed to our LLM. 
This is where the reasoning takes place. LLM might decide, hey, I know the answer already. Boom. Done. Don't even need to pick up tools. I can solve this with my bare hands, the LLM says. Or maybe we need to go in and pick up a tool. Now, if you squint closely in, you can see we have tools like Archive, tools like Search, like Wikipedia, like, what is that, DuckDuckGo right there? right there now what all these tools have in common we're going to go get some information you might say retrieve some information and we're going to collect it and try to then incorporate it into our reasoning that we're doing about answering this question. We might need to go back and grab another tool. We might need to see what it gives us when we engage with it and then incorporate that back in to our reasoning before we give a final answer. So the LLM here is where we're sort of doing the reasoning part of the reasoning action pattern. And this is important. pattern. And this is important. Now, when we go to the tools, when we go to the retrieval of information, what are we doing? We're actually sort of augmenting the thread that's going that our LLM is considering and reasoning about with retrieved information. We're kind of doing rag, aren't we? In fact, I'm going to put to you today that Today, in most cases, agents are just fancy rag. And we'll see this as we walk through exactly what we will build today. Armed with the patterns of prompting, of fine-tuning, of rag, and of agents, we can put together a more complex picture of what a typical multi-agent system looks like. Let's think about multi-agents. Why would I need more than one agent, you might ask? Why would I need more than one agent? You might ask. Well, remember how the agent in our picture just a minute ago was doing the reasoning. Well, consider that we might want our reasoning machines to be specialized, right? We want our reasoning machines to be specialists, just like out there in the world. Now, if the reasoning machines are to be specialists, and I want a bunch of them, where does that ultimately lead to in the context of AI? Where does that ultimately lead to in the context of AI? One question you might ask is, well, does that mean that if I had an AGI LLM that I could just use one agent? I want to bring Wiz up to the stage here for a minute to comment on this. So if I'm looking at specialists and connecting them all up together, isn't it kind of in the limit that the artificial general intelligence is going to understand everything that all of the specialists understand? So it sort of makes the idea of multi-agents not necessary. Is this a crazy way to think about it, Wiz, or is this technically sound? No, I think that's probably true. I mean, eventually, right, when we have some AGI, we could just have one agent do everything. I mean, there's a lot to be said about potentially, you know, this idea of expertise might not ever leave and so maybe we have you know specialized intelligences that are better than these uh generalized intelligences but i think the way that people use the term agi right is uh means that we would only need that agent or that system right right? We wouldn't need those specialists because they would be no better than this AGI. I mean, of course, it depends on how you define AGI, right? I mean, it's like, yeah. Okay. Okay. Okay. All right. Let's get off our high horse. Thanks, Wiz. Thanks for the two cents on that. Let's go back down to earth, everybody, two cents on that. 
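Circling back to the simple agent loop described above (reason, optionally pick up a tool like search, Wikipedia, or arXiv, observe, then reason again), here is a hedged sketch of that loop using LangChain's ReAct agent. The tool classes, hub prompt id, and model name are what I believe current LangChain versions ship, but treat them as assumptions and verify against your installed packages.

```python
# A minimal ReAct-style agent with a couple of retrieval tools.
# Sketch only: requires the duckduckgo-search and arxiv packages at runtime.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.tools.arxiv.tool import ArxivQueryRun
from langchain_openai import ChatOpenAI

tools = [DuckDuckGoSearchRun(), ArxivQueryRun()]
llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)   # any capable chat model
prompt = hub.pull("hwchase17/react")                   # standard ReAct prompt (assumed id)

agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

executor.invoke({"input": "What recent work extends Llama 3's context window?"})
```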
Let's go back down to earth, everybody, because we have a job to do today. Let's talk about multi-agent frameworks, because presumably we don't have AGI today. What we're talking about when we're talking about multi-agent frameworks is we're talking about using multiple independent agents that are each powered by a language model, an LLM, let's say, potentially an SLM in the future, a little small specialized language model potentially. And we basically want to consider two things. What are the agents and what are they good at? And how are they connected? Now, if you think too deeply about this, you start to get lost in the patterns a little bit. So we're going to try to make it easy for you to understand why this is useful. to do things a little bit more cleanly in short. We can group tools and sort of responsibilities, almost like job responsibilities together. We can separate prompts instead of having, of course the infinite context window issue will tell you that you can sort of just dump everything in there and maybe you can, but it makes it really hard to sort of debug exactly where things are going wrong and this separation of prompts can also actually not just provide a cleaner architecture but potentially even give better results and then just conceptually it's going to be a lot easier for other people to understand what you've built. Now, there are many ways to accomplish lots of things you might try to build. I wouldn't immediately jump to multi-agent in almost any case. In fact, I would love to hear if you're in the audience today, if you've heard of an actual use case that you're connected to creating massive business value from a multi-agent use case. These things are still rare today and they are difficult to implement, but there are tools out there. And some of the tools include things like AutoGen. In fact, some of the things we'll talk about today in Landgraf were inspired by the AutoGen paper. This came from Microsoft and they call this a conversation framework. We essentially want to allow these multiple agents to converse with one another. That's what AutoGen is all about. You might have also seen Crew AI. of getting the crew of agents together and operating in a real cohesive unit that's crushing it together. Just like if you're on crew. And obviously lots of people are using this stuff. This is more of a low-code solution. Now we're gonna use LandGraph today. And LandGraph is all about, quote, building stateful multi-actor applications. This is like you can put many agents within graphs, directed cyclic graphs that track the state of affairs as you move through the graph. We've talked about Langraph before, we won't belabor the point, but it's all about adding cycles to applications built on Langchain. And in terms of cognitive architectures that you can leverage within Langraph, the one we're gonna focus on today is the router. It's going to be particularly useful. Now you might say, what's a router? Well, the TLDR on routers is that they choose the action. Remember that reasoning that we did in the beginning to sort of choose the tool. You might choose the tool. You might choose the RAG system. You might choose to go to another agent that is actually just a different prompt. And so when we think about the flows of multi-agent set up, these are the ones that you'll see if you go to the Landgraf repo on GitHub. There's the multi-agent collaboration, the hierarchical team, and the agent supervisor. 
When we do multi-agent collaboration, we're essentially trying to get two agents to share the same context. And just as we heard in the autogen paper, to have a conversation. Here we have a researcher and a chart generator, as well as a router. All three of these guys are agents. But I'm gonna sort of highlight the routers in our story. The research agent is simply sort of designed with the prompt. You should provide accurate data for the chart generator to use. The research agent is simply sort of designed with the prompt. You should provide accurate data for the chart generator to use. Chart agent is designed with the prompt, any charts you display will be visible to the user. This is a quite simple setup. The router decides which tool to call, be it the chart generator or the researcher. We can make this into a slightly different setup by thinking about our kind of router as being a supervisor. And the supervisor might choose to go to any given agent to delegate specific tasks that the user asks. Now, if we combine these two ideas, we get into something that's a little bit more hierarchical, where we actually have different graphs nested within our top level graph, where our top level supervisor stands. So this is a supervisor that is a router and the research team and document authoring team here are also both represented as supervisor routers at the mid level. All of these have prompts associated with them. We're going to simplify this slightly and use it for our build today. We're going to build what we're calling AIM Editorial, AI Makerspace Editorial. And it is a cut across the hierarchical team setup we just saw that combines the supervisor as well as the sort of collaboration idea. What we'll do is we'll have a top level supervisor. We'll have a research team that's going to leverage Tavoli search, which is a search API specifically designed for use with LLMs and agents. We will build our own custom rag system and then we will create an initial writer, a researcher, a copy editor, and an editor that will make sure our writing is super dope in our document team. At its core, if we think about what's actually going on here, at the top level, our supervisor is directing. It's deciding, it's instructing, it's delegating. These slightly more specialized research team and document team supervisors are doing something similar. We've got retrieval that we can do through Tavoli search or through our custom rag system. And fundamentally, each of the agents in our document team are using prompts. So you can see here is that if we look close enough, we can start to see why we might say something like agents are just fancy rag. We have a goal today and we want to write a dope blog on something relevant. Well, what's something relevant? We want to talk about long context. We saw this pretty sweet paper, extending Lama3's context tenfold overnight. As Lama3 told us, they would do this over the next few months. Also, shout out to Gradient AI releasing the one million context link a week ago. That was a pretty gangster move. 
And then this one is a formal paper on it, though, 8K to 80k with Qlora fine-tuning what we're gonna do is we're gonna set up our research team we're going to use tabily search and we're gonna use this long context window paper to build a custom rag system we're gonna set up our document team and the writer we're gonna tell it something like you are an expert writing you are an expert in writing blogs below are your files in the current directory the note taker will tell it you are a senior researcher tasked with writing a technical blog outline and taking notes to craft a perfect technical blog the copywriter will be our grammar, copy, punctuation editor, very tactical. And then our dopeness editor is going to be an expert at punching up technical blogs so that they're more dope, extra lit, more cool. Let's see how we can set up these teams using Lang graph right now. Wiz, show us how to do it. Oh, yeah. Okay, so here we go. This is going to be a notebook. So I'm gonna zoom in a little bit. So you first this is what we want, right? This is like the this is the desired output, right? So we have some input to some kind of supervisor, agent, and then we receive output from it, right? It does the task does the thing. So I think when it comes to the, the goal here, we want to combine two things, right? We want to combine kind of this idea of rag that we have. And we want to add this uh you know this potential agentic uh you know piece on top of it and the way we're going to achieve this is in parts and we're going to start with uh some basic parts and then we're going to slowly get a little bit more crazier and crazier now uh this is Now, this is an adapted thing from Langchain's own documentation. They have a great hierarchical agent example. We've made some changes here and there just to demonstrate how easy it is to work with these Lang graph systems. And we're just going to go ahead and get started. And of course, it's based off of the auto gen research that was done. So first things first, we need a bunch of dependencies. It's not crazy, though, right? So we need lang graph, lang chain, lang chain open AI, and lang chain experimental. And then of course, we want Qdrink client or quadrant, sorry, client, client, PyMOO PDF and tick token. This is, for those of you familiar with it, this feels a lot like a rag dependencies and it sure is. We're also gonna grab two API keys. We have our OpenAI API key and our Tivili API key. Now the Tivili API key is something that you'll have to sign up for. And on their free version, you get like a thousand requests per unit time. And basically the idea is it's like Google search, right? But through it through a clean API. There you go. So it's free to trial. And then if you want to go crazy with this, you're gonna have to pay pay the piper as it were. Okay, so the first thing we're gonna do, right, right. So I mean, multi agent rag, if we don't do rag, you know, it feels like we missed the mark. So we're just going to set up some simple rag here. Just gonna be over a single PDF, right? So we've got a lot of content on more advanced rag and everything like that for this notebook that's already quite long. So we're going to just keep it to simple rag. But the way that Langraph, LCEL work, you can extend this to however absurd of a chain or rag system that you want, as long as it's wrappable in a Python function, and it can take in some text and it returns some text, right? So I think this is a very important piece of the puzzle. 
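A hedged sketch of that setup cell, assuming a notebook environment; the package names mirror the dependencies listed above, and exact version pins are omitted.

```python
# Setup cell sketch (assumed package names; pin versions as needed).
!pip install -qU langgraph langchain langchain-openai langchain-experimental \
    qdrant-client pymupdf tiktoken

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API key: ")  # free tier available
```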
All of these components that you see are hot swappable, right? Like we can change them however we'd like. That's kind of the beauty of Landgraf, right? So first thing we're going to do, put the R in RAG, we need retrieval. We're just going to load up this long context paper from archive. So I'll show you here. It's just like, you know this this whole context window thing is is it the rag killer right all of this this other stuff right so let's write a little blog on extending rag or extending context windows next we're going to chunk it down to size this is classic we're just going to do some uh some naive splitting nothing fancy going on here uh some naive splitting nothing fancy going on here uh we're gonna turn our one uh document into 15 smaller documents let's go then of course we need an embedding model right if we're gonna do rag we need to have some vector representation of our text assuming that we want we care about semantic uh retrieval which in this case we definitely do uh we're also going to create a quadrant uh backed vector stores this power by quadrant Quadrant is just a great vector store I mean that's the reason we're using it that's really it uh it's very good at its job um and uh it can it can scale very well so even though this is clearly like a toy example um you know Quadrant can handle very much non-toy examples, which is great. And then, of course, we're going to grab a retriever. So, we're just going to modify our vector store into a retriever. Thanks, LaneChain. Nice and easy. Then we're going to do the ANRAG, which is augmented, right? So, this is where we're going to add our context to our question. And we're going to give some instructions, talk about how it's a helpful assistant. It uses available context to answer the question. And if you don't, if it doesn't know how to answer it, it should say, I don't know. And then finally, of course, generation, because this task doesn't need like a ton of like incredible reasoning skills. We can just use GPT-3, 5 turbo for this part. There's, there's no need to use like a GPT-4 for this. And then we build a simple rag chain, right? So this is just an LCL chain. It's going to take some, you know, question, pass it into the retriever to get the context. And then it's just going to pass the question forward to the next step, which is going to pipe into the rag prompt, which is going to pipe into the chat model, which is going to be parsed as a string. So we can do things like this, ragchain.invoke on a question. What does context along context refer to? And then we get a string response. The context along context refers to a coherent text such as a book or a long paper that contains multiple independent text. Yada, yada, yada. You get the idea. Okay. So first of all, there's some limitation to this particular approach. So I just want to acknowledge those. Again, it's just an illustrative example. But, you know, this is a specific PDF, right? So we'd love it if it could take dynamic PDFs. And it's obviously very naive rag. So we'd love for it to be a little bit more complex. And you can do all of those things. And as long as the ending result is an LCL chain, nothing else will change, right? So if you want to tinker around with this, make a more involved rag chain, you can do that. Absolutely. As long as you can get the output, you know, as long as you can get the whole object to be an LCL chain, you're going to be just fine, which is pretty dope. Then we're going to make some helper functions. 
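Before the helper functions, here is a hedged sketch of the simple RAG chain just described: load one PDF, split it naively, embed into an in-memory Qdrant collection, and wire the pieces together with LCEL. The arXiv URL, chunk sizes, and prompt wording are illustrative approximations rather than a verbatim copy of the notebook.

```python
from operator import itemgetter
from langchain_community.document_loaders import PyMuPDFLoader
from langchain_community.vectorstores import Qdrant
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# R is for retrieval: one paper, split naively.
# Assumed arXiv id for the long-context paper -- swap in whichever PDF you are using.
docs = PyMuPDFLoader("https://arxiv.org/pdf/2404.19553").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100   # illustrative split parameters
).split_documents(docs)

vectorstore = Qdrant.from_documents(
    chunks,
    OpenAIEmbeddings(model="text-embedding-3-small"),
    location=":memory:",                 # toy example; Qdrant scales well beyond this
    collection_name="extending_context",
)
retriever = vectorstore.as_retriever()

# A is for augmented: stuff retrieved context plus the question into a prompt.
rag_prompt = ChatPromptTemplate.from_template(
    "You are a helpful assistant. Use the available context to answer the question. "
    "If you cannot answer from the context, say you don't know.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)

# G is for generation: GPT-3.5 is plenty for this step.
rag_chain = (
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    | rag_prompt
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)

rag_chain.invoke({"question": "What does long context refer to?"})
```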
We need some helper functions. We're going to do the same thing over and over again. Let's just wrap it in a helper function, right? So first of all, we're going to create agent nodes. So these are nodes with agents, right? So you'll notice all this agent node does is it wraps calling the agent in a function, and it takes what the agent returns, and it names it, you know, or it adds to the state, we're going to get into state in just a second, but it adds to the status human message with the output. And that's it. That's all it does. Right? So this is the idea. We want to wrap those nodes. The reason we wrap the nodes is so that they work as expected with LandGraph, right? Where it's gonna take some state agent name, and then it's gonna give us this object that's gonna be compatible with our state. Very cool. So we have this idea of an agent node and we're invoking an agent, but how are we creating these agents, right? With a create agent helper function, of course, let's go. A few key things to keep in mind here. Number one, you know, we want to have kind of this boilerplate prompt on our system prompt for all of our agents, right? Because all of our agents that we're creating with this are going to be, you know, very, very similar under the hood in terms of their prompts. This doesn't have to be true, right? You can custom make each agent, but for us, it makes a lot of sense to just use the same boilerplate at the end of every agent, right? Your other team members and other teams will collaborate with you during, with their own specialties. You were chosen for a reason. You're one of the following team members. And then this classic, do not ask for clarification, right? We just want to go to the agent, get some response based on the tools that it has, and then send that up the chain. So this is the idea. Of course, we're going to be able to modify that with a system prompt. So we're going to be able to define more exactly what this agent is. We just have this kind of suffix that's on each agent prompt. There you go. Okay. Next piece is big. We have our agent scratch pad. This is unique to this agent right here, right? Whatever agent we've created, this is unique to it, right? OpenAI function agent, this is unique, right? So in our executor, right? This is it's one executor, which has, or which is comprised of this create open AI functions agent, right? And these tools, which has its own scratchpad. Now, this is something that's super important. Okay. So each of these little sub agents are their own agent. So already we're in, we're in, we're in multi-agent before we even get into the first draft. Right. But the idea is that they all have their own scratch pad and we're just going to populate the important stuff up the chain to the supervisors. And this is the idea of how these systems can work so effectively, right? Each agents can be able to do like a bunch of stuff. But that stuff largely just doesn't matter to the supervisor agent, right? The super right, just like in real life, supervisors care about the output. They're like, What do I get at the end here guy, right? So that's why we have this individual scratchpad for each agent. And then we're going to see another layer of that as we continue through the notebook. So that's going to create all of our agents. Then we need this idea of a supervisor. Now, I hate to be the bearer of mundane news. Supervisor is just a router. It just routes from one thing to the next thing. 
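A hedged reconstruction of those two helpers. Names and prompt wording approximate the notebook rather than quote it; the key ideas are the shared boilerplate suffix, the per-agent scratchpad, and the thin node wrapper that writes one named message back into graph state.

```python
import functools
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

def create_agent(llm, tools, system_prompt: str) -> AgentExecutor:
    """One worker agent: its own system prompt, its own tools, its own scratchpad."""
    prompt = ChatPromptTemplate.from_messages([
        ("system",
         system_prompt
         + "\nWork autonomously according to your specialty, using the tools available to you."
           " Do not ask for clarification."
           " Your other team members (and other teams) will collaborate with you"
           " with their own specialties."),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),  # private to this agent
    ])
    agent = create_openai_functions_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools)

def agent_node(state, agent, name):
    """Wrap an executor so it reads graph state and reports back one named message."""
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}

# Bound per worker later, e.g.:
# search_node = functools.partial(agent_node, agent=search_agent, name="Search")
```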
So it takes in, you know, it takes in current context, and then it makes a decision, where do we go next? Which which tool agent or which agent do we go to next? Right? What's worker do we go to next? And then, you know, if the answer is we don't go to a another team member, we're just gonna straight up go, we're gonna finish, we're gonna be done, right? So the idea of this particular team supervisor is just to act as a router, right? So say, hey, this looks like it needs this work done. And then it gets response back. Now it looks like it needs this work done. Gets response back. And now it's done, right? This is all it's doing. It's a lot of code to get there. but basically this is just a function. And we're once again going to create this open AI function situation. This is all it's doing. It's not crazy. It's not insane. It's just straight up routing. Where do we go next? So now that we've got those helper functions, that's just the helper functions. It's a bit of a doozy, even notebook, I know. But we're now going to talk about the research team. So remember, our total system is going to be comprised of, and we'll just go, we'll go back to the diagram for a second here. This supervisor agent, which is going to interact with this research team agent. Okay. And this document team agent. So what we're going to talk about next, back down the notebooks, I know the scrollings, you know, not flying. So sorry about that guys, but to just want to reference that document. So first things first, we have a tool using agent. What do we need to do? We need to give us some tools, right? Batman's going to have his utility belt or else he's not Batman. So we're going to start by creating a tool for Tivilli. Now, you'll notice we don't really have to do anything here, right? We're just pulling it from the pre-made integrations from Langchain tool, but we can create our own tools if we want, right? So we're going to show that next. Now this is, so technically we don't need to create a tool from our RAG LCL chain because LCL components can be nodes in a graph. However, we're just going to show it so that you can see how that tool creation happens. There's no real reason other than that to do this. You can just use the LCL chain. That's going to be fine as long as you make sure that the inputs are coming in correctly. So you might have to put a formatting component on the top of your chain. But the idea is here, we're just going to show it in the tool so you can see how easy it is to make the tools. So we just wrap it in our tool decorator, right? This just modifies the function below. And then we create an annotated parameter with, you know know it expects a string and the annotations query to ask uh the retrieve information tool and then we give it this doc string this doc string is important right so one of the things you're going to notice whenever you're working with agents graphs lane chain ai engineering we're always prompting right we're always prompting all the time right this is this is a prompt the lm is going to see this and the time. Right? This is a prompt. The LLM is going to see this and it's going to use it as a prompt. So remember, when we're writing those doc strings, it's not just random text for humans. Right? This is how the LLM is going to interact with our system. So it's important to write clear doc strings here. And then all we do is return that chain invoked. Okay. 
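Hedged sketches of the pieces just described: the pre-built Tavily tool, the RAG chain wrapped as a tool (where the docstring doubles as the prompt the routing LLM sees), and the supervisor, which really is just a router implemented as a single function call that returns the next worker or FINISH. The schema shape and wording are assumptions in the spirit of the notebook.

```python
from typing import Annotated
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool

tavily_tool = TavilySearchResults(max_results=5)  # pre-made LangChain integration

@tool
def retrieve_information(
    query: Annotated[str, "query to ask the retrieve information tool"]
) -> str:
    """Use Retrieval Augmented Generation to retrieve information about the
    provided long-context paper."""
    # The docstring above is what the routing LLM reads -- keep it descriptive.
    return rag_chain.invoke({"question": query})   # the LCEL chain built earlier

def create_team_supervisor(llm, system_prompt: str, members: list[str]):
    """A supervisor is just a router: one function call picks the next worker."""
    options = ["FINISH"] + members
    route_fn = {
        "name": "route",
        "description": "Select the next role.",
        "parameters": {
            "type": "object",
            "properties": {"next": {"type": "string", "enum": options}},
            "required": ["next"],
        },
    }
    prompt = ChatPromptTemplate.from_messages([
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        ("system",
         "Given the conversation above, who should act next? "
         "Or should we FINISH? Select one of: {options}"),
    ]).partial(options=str(options))
    return (
        prompt
        | llm.bind_functions(functions=[route_fn], function_call="route")
        | JsonOutputFunctionsParser()
    )
```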
So now we have our two tools, our Tivoli search tool, and we have our retrieve information tool, which is our RAG pipeline. Next up, we're going to create some state. So we're going to add three objects under state. We're going to have messages, which is just a list of messages. So everything we've done up to this point. Team members, that's the members we have on our team unsurprisingly and then who's up next right so this is going to help decide where where are we going next right who who am i passing the ball to next uh so this we just write about that a little bit there we're going to be using gbt 1106 preview uh gbt oh one uh i can't remember the rest of the numbers right right this exact time but the newer version, the January version, is actually a little bit worse than 1106 at this task. For some reason, it just gets lazy. I think they attempted to fix that. It didn't work, so we're going to use 1106. So this is our LLM. You'll notice, though, that we are using GBT-4 here. This is no longer like GBT-3.5 is going to do. We need a strong reasoner. We need an LLM that can do a great job. That's why we're using GBT4 here. So now that we have our LLM, we need to start creating some agents. So we're going to create first our search agent, which is going to be based on that GBT4 LLM. It's going to have access to this Tavilli tool, and it's going to have this description that explains when we should be using this tool. And then of course we're going to convert that to a node. So now we have this search agent and we have it tucked inside of a node. That's awesome. We're going to do the same thing for our rag pipeline and then tuck it inside of a node. You love to see it. Next we create our supervisor agent. We're going to pass in that same LLM, GBT4. We're going to give it this text. Now, remember, this text is in addition to the other text that exists in the boilerplate, but the idea is that it's going to be able to pass to these separate tools or finish. We're going to let it know which tools it has access to. And then we can move on to the best part, right? Making a graph. We initialize a graph with the research team state graph. We're going to add the node that we created to our graph. We're going to name it search. We're going to add the research node, which is the LLM or the rag node, right, to our graph. We're going to name it paper information retriever. These names are actually pretty important. They have to be in this format. They can't have spaces and this kind of thing. So, make sure that you're naming these correctly. And then, of course, we're going to add our supervisor node. So, now we just have, have like three nodes chilling in a graph. You know, they're not connected to each other at all. Okay. So we're going to then create edges. The edges are pretty straightforward, right? If we're at the search node, we're going to return to the supervisor node. If we're at the paper information retriever node, we're going to return to the supervisor node, right? These nodes all lead to back to the supervisor. Now, from the supervisor, dependent on what's next in our state, remember we defined this next in our state up here, right? Dependent on what's next is going to determine where we go next. So, if it's search, we'll go to the search node. If it's paper information retriever, we'll go to the paper information retriever node. And if it's finish, we'll go to the end node. Now, two things I want to be very clear about here, right? 
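Continuing the sketch: the research team's state object and the agents-as-nodes wiring described above. The model name, worker names, and supervisor wording are approximations, and `create_agent`, `agent_node`, `create_team_supervisor`, `tavily_tool`, and `retrieve_information` come from the earlier sketches.

```python
import functools
import operator
from typing import Annotated, List, TypedDict
from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

class ResearchTeamState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]  # everything so far
    team_members: List[str]                               # who is on this team
    next: str                                              # who acts next

llm = ChatOpenAI(model="gpt-4-1106-preview")  # strong reasoner for routing/tool use

search_agent = create_agent(
    llm, [tavily_tool],
    "You are a research assistant who can search for up-to-date information using the Tavily search engine.")
search_node = functools.partial(agent_node, agent=search_agent, name="Search")

research_agent = create_agent(
    llm, [retrieve_information],
    "You are a research assistant who can provide specific information from the provided paper.")
research_node = functools.partial(agent_node, agent=research_agent, name="PaperInformationRetriever")

supervisor_agent = create_team_supervisor(
    llm,
    "You are a supervisor tasked with managing a conversation between the following workers: "
    "Search, PaperInformationRetriever. Given the user request, respond with the worker to act "
    "next. When finished, respond with FINISH.",
    ["Search", "PaperInformationRetriever"],
)

research_graph = StateGraph(ResearchTeamState)
research_graph.add_node("Search", search_node)
research_graph.add_node("PaperInformationRetriever", research_node)
research_graph.add_node("supervisor", supervisor_agent)

# Every worker reports back to the supervisor; the supervisor routes on state["next"].
research_graph.add_edge("Search", "supervisor")
research_graph.add_edge("PaperInformationRetriever", "supervisor")
research_graph.add_conditional_edges(
    "supervisor",
    lambda state: state["next"],
    {"Search": "Search", "PaperInformationRetriever": "PaperInformationRetriever", "FINISH": END},
)
```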
Basically, we can go to the search or paper information retriever nodes, which are powered by those agents, which have those tools. And then they return to the supervisor or we can end in the graph. Now, this graph has state. And that state is its own state. So now we have agents with their own scratch pads. And then we have these nodes which represent those agents. And the entire graph has its own state, right? So we've got two layers of kind of keeping information apart here, right? Very important to think about. And then we just compile it and that part's great. And we set an entry point, right? We enter through the supervisor, easy peasy. And then we can use Mermaid to display our graph. It doesn't look beautiful, but it is right, right? So we have this idea that we can go from our JSON output function parser, which is like, you know, where do I go next, we can go to the paper information retriever, which goes back to the supervisor agent, or we can go to search, which goes back. And then this can also lead us to the end or finish. So that's the mermaid image of our, of our graph of our research team graph, right? Now, because we intend this to operate with another graph, right? We have to have some way to get in here. And the way that we're going to do that is through this enter chain. And we're going to create an LCL chain from our entire graph. This is the beauty of LCL, right? This chain represents our entire graph, our whole graph. But we could just straight, you know, just tack on another LCL component. Easy peasy. And then we can test it out. And we can see things like, you know, we enter, the supervisor says, we're going to search. We do some kind of retrieval with search. We come back and the supervisor says, we're going back to search. We do that. And then eventually the supervisor says, hey, you know what? Actually, we're going, now we're going to the paper information retriever, right? So we, the graph decided we would go to search twice and paper information retriever right so we the the graph decided we would go to search twice and paper information retriever once then it felt like it had all the information that it would need um dope okay now so that's the research team side right we created a graph the graph does stuff we're now gonna immediately think this is a single unit. This entire research team now is just this bit right here. Where it does this thing. We give it some query and it tells us which tools to use to research information and then eventually we come up with a final response that we are going to pass back to our supervisor. So this is the research team supervisor. This next one is going to this the CEO or however you want to think about it. So next up, we have the document writing team, the document writing team, we're going to go through this a little bit quicker. It's the same idea exactly, except instead of tools that relate to search and information retrieval, it's related to document creation and document editing. So we have our create outline tool, which is gonna open a file and put an outline into it and then save that file. Then we have a read document tool, which is going to open a file and read it. Then we have our write document tool, which is going to unsurprisingly open a document and write to it. And then we have our edit document tool, which is gonna, it's gonna blow your mind, open a document and edit to it. And then we have our edit document tool, which is going to, it's going to blow your mind, open a document and edit it, right? 
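And the last two steps for this team, sketched under the same assumptions: compile the graph with the supervisor as the entry point, then wrap it in a small entry shim so other chains (including the parent graph later) can call it with a plain string. The query and recursion limit are illustrative.

```python
from langchain_core.messages import HumanMessage

research_graph.set_entry_point("supervisor")
compiled_research_graph = research_graph.compile()

# Entry shim: lets callers pass a plain string instead of a full state dict.
def enter_chain(message: str):
    return {"messages": [HumanMessage(content=message)]}

research_chain = enter_chain | compiled_research_graph  # the whole graph as one LCEL chain

# Quick smoke test -- stream so we can watch the supervisor route between workers.
for step in research_chain.stream(
    "What techniques extend Llama 3's context window?", {"recursion_limit": 25}
):
    print(step)
```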
So we have these few different tools that we can use, right? So we have the ability to create outlines, which are going to be a document. Then we can read those documents. We can write new documents, or we can edit documents. All awesome stuff to be able to do when you're trying to write a blog, right? We're going to create this state for our document writing team. And it's going to be exactly the same as our research team, except we're going to add this current files additional parameter. And what this is going to do is it's going to just tell us what files currently exist in the directory it's working in. We're going to also have this prelude. All this is doing is it's saying, hey, by the way, this is the current files you have, right? This is how many files that you have. And that's it. We create the graph. It's exactly the same as the one we created before, but with some additional nodes. The idea is that every node goes back to the supervisor. So all paths from all of the sub-agents lead back to the supervisor. And then the supervisor can send it to any one of those particular agents. And that's it, right? So this is the idea. Then we can look at this and we can see it happening, right? So we see you can come in here and it can go to the doc writer, the note taker, the copy editor, the dopeness editor, and then eventually it can finish. Now, one thing that I do want to just keep in mind when we add these tools up here, right, we're going to, for each of these entities, right, we're going to have access to specific abilities, right? So this is the idea, is that we want to give our specific team members, sorry about all the scrolling here again, specific team members are going to have access to specific abilities and that's important. Okay. Now that's all great so far. Next step, right? We're just going to wrap this in the same thing we did before for our team, our research team writer. to wrap this in the same thing we did before uh for our team our research team writer and then we're going to see it work you can see here we ask it to write a short outline a linear regression write it to disk and what does it do it goes to the doc writer node which does exactly that and then we get a short outline that's written to disk and then if we look in our this is this is the worst for sure but if we look here we can see there is a linear regression outline that's created in text right in a text file in that temp directory that we pointed it to pretty awesome okay so that's what we've done up to this point we've created our research team and we've created our document writing team. And now we're going to go back to Greg, who's going to show us how we can tie these two together into a truly multi-agentic experience. Back to you, Greg. Awesome. Okay. So we've got our research and our doc team set up, the ICs are ready to do their work. So you know what time it is. It's time for the supervisors. And the thing about the supervisors that's so interesting, we talked about this before, they're just routing. They all have the same prompt. You are a supervisor tasked with managing a conversation between the following workers. Worker one, worker two, worker three, worker four, whatever. Given the following user request, respond with the worker to act next. Each worker will perform a task and respond with their results and status. When finished, respond with finish. Okay. Seems simple enough. So supervisors are just routers then, right? It they doing any reasoning? Are they taking any action? 
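Two of the document team's tools, sketched in the same `@tool` style, plus its state object with the extra `current_files` field. The real notebook also has `create_outline` and `edit_document` built along the same lines, and the working directory path is an assumption.

```python
import operator
from pathlib import Path
from typing import Annotated, List, TypedDict
from langchain_core.messages import BaseMessage
from langchain_core.tools import tool

WORKING_DIRECTORY = Path("/tmp/content")          # assumed scratch directory
WORKING_DIRECTORY.mkdir(parents=True, exist_ok=True)

@tool
def write_document(
    content: Annotated[str, "Text content to be written into the document."],
    file_name: Annotated[str, "File name to save the document as."],
) -> str:
    """Create and save a text document."""
    (WORKING_DIRECTORY / file_name).write_text(content)
    return f"Document saved to {file_name}"

@tool
def read_document(
    file_name: Annotated[str, "Name of the file to read."],
) -> str:
    """Read the specified document."""
    return (WORKING_DIRECTORY / file_name).read_text()

class DocWritingState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]
    team_members: str
    next: str
    current_files: str  # the "prelude" fills this with a listing of WORKING_DIRECTORY
```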
What's the role they're playing, these supervisors, exactly? I'll leave that as a thought experiment for all of you watching. But it is worthwhile to think about in the 21st century. We know what the ICs are doing. We can see their output. But for now, it's time to create these supervisors, make sure the work is going to the right place, being routed properly for both the research team and the documents team, up to the meta supervisor who is oh so agentic at the very top. Wiz, back to you to close it out. muted. Sorry, guys. Sorry about that. Thanks, Greg. But, yes. All we need to do is we need to, you know, get a new LLM. It's just going to be the same one, right? But then we're going to create our supervisor node. And the supervisor node, thanks for all the reminders and chat guys sorry about that uh but the the idea is we have uh am i still muted uh if i'm still muted let me know okay good so the idea is we just need to create one more layer. And all that layer does is it takes us from, right? So before we created two different graphs, instead of graphs, let's just consider those nodes, right? So this new supervisor, all it does is it tells us when to go to the research team or the blog writing team. That's it. I mean, you can't make this stuff up, right? This is all it's doing. Like Greg said, it's a router. We create new state, right? Which is just going to, you know, we have less things we need to keep track of since we're not so worried about the team members. There's only two team members and we have our messages. And then of course we have our next. So that's who we're going to next. And then this next piece is the best. We only care about the last message going into the new graph. And we only care about the last message from that subgraph. So we can think of it this way. We have this parent graph and we have care about the last message from that subgraph. So we can think of it this way. We have this parent graph and we have this child graph. But the only communication between those two layers is going to be the most recent message from either of them, which means that the parent graph or the meta supervisor, the ultimate supervisor, the one right at the top, CEO, supervisor, right? The one right at the top, CEO, whatever you're going to call it, it only sees the last piece of work from the research team supervisor or the blog writing supervisor, right? So this is huge, right? We only pass that relevant information. This keeps this state very clean, lets it be a very effective reasoner and powerful tool. And then of course, we need to build the graph. Well, the graph is very easy to build because there's only two nodes and they both go back to the supervisor and then the supervisor decides if it's gonna go to the blog writing team, the research team, or it's gonna be done. And we can finally use the thing. And ultimately when finally use the thing. And ultimately, when we use this, it's going to, you know, send a, it's going to do this whole flow, right? And this whole flow is going to go through research team, blog writing team, research team, blog writing team. You know, it probably won't do that many iterations, to be honest with you. 
Usually it does two or three, but at the end we get this right this output just a full blog on the paper i mean listen is this the best blog that's ever been created okay i'm not gonna say that but it is a blog it was created from the paper it did go through dopeness editing copy editing right uh we can see that this is uh you know pretty dope results are nothing short of revolutionary that's pretty hype language. That's something that our dopeness editor helped with. This is the idea. This part is very straightforward. Each of those sub nodes, right? Each of the sub nodes or sub graphs, we just consider a node. That's the power of laying graph it's an entire agent graph but we're just like it's a node you know who cares uh and that is it uh so good with that we'll pass you back to greg so sorry about being muted guys thanks for calling me on the chat and uh also don't forget to like comment subscribe smash the notification bell i know it's kind of kind of silly but it does help we're here every wednesday we love talking about this kind of stuff and uh i'll pass it back to dr dr greg to uh bring us to q a bring that bell baby yeah you crushed it wiz thanks man so we got a gentic with the meta supervisor and now we can think about this idea of multi-agent RAG in the context of the patterns of generative AI that we're using to build LLM applications. We saw how this all came together in a single build, been through a lot in an hour together. And we can also start to see why we can say things like agents are just fancy rag. Now, remember, these are useful because the grouping is very helpful. the separating of prompts is very helpful, the conceptual models are very helpful. Again, let us know if you come up with any sort of must-have multi-agent use cases. I would love to hear about them. But the patterns, the patterns, the patterns, they're just everywhere. Tools all have prompts and supervisors or routers and searches, retrieval. It's a lot to get a handle on. And we hope that this helped you out today. If you have questions, please let us know in Slido. We're going to start taking them now. And we look forward to creating more multi-agent content in the future. We want you guys to take this notebook and create your own blogs with it. We will liken some to those and maybe we will. If we can get this good enough, dope enough, Chris, create our own AI Makerspace auto blogger, the AIM editorial. Okay. So two, Slido, which platform is better for multi-agent rag? Langchain or Lomindex? Langchain. Boom. Okay. All right. And can we get just a why real quick? How come? I mean, stuff like LCL is just, it's such an effort multiplier, right? We make one thing, we can just straight use it in the next thing. Yeah. It's tough to beat that right now. Yeah. I love the second question so much. Seems that everything can be done with a single agent. Only difference is the forced sequence of agents, of tools. Is there something else I missed? Anonymous. Yeah, I think maybe a little bit. So there's no forced sequence of tools here. The agent is free to select which tool to use when, in which order, how many times. Yeah, that's the idea. So I would say the different sequence of agents is kind of where we get this. Could it all be done with a single agent? Maybe, right? So you could just add all these tools to one agent. 
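Continuing the sketches above, a hedged version of that top layer: each compiled team graph becomes a single node, and only the last message crosses the parent/child boundary in either direction. Here `authoring_chain` is assumed to be the document team's graph wrapped the same way as `research_chain`, and the supervisor wording is an approximation.

```python
import operator
from typing import Annotated, List, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph import END, StateGraph

class State(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]
    next: str

def get_last_message(state: State) -> str:
    return state["messages"][-1].content          # only the latest work crosses down

def join_graph(response: dict):
    return {"messages": [response["messages"][-1]]}  # only the latest work crosses up

supervisor_node = create_team_supervisor(
    llm,
    "You are a supervisor managing a conversation between the following teams: "
    "ResearchTeam, BlogWritingTeam. Respond with the team to act next, or FINISH when done.",
    ["ResearchTeam", "BlogWritingTeam"],
)

super_graph = StateGraph(State)
super_graph.add_node("ResearchTeam", get_last_message | research_chain | join_graph)
super_graph.add_node("BlogWritingTeam", get_last_message | authoring_chain | join_graph)
super_graph.add_node("supervisor", supervisor_node)

super_graph.add_edge("ResearchTeam", "supervisor")
super_graph.add_edge("BlogWritingTeam", "supervisor")
super_graph.add_conditional_edges(
    "supervisor",
    lambda state: state["next"],
    {"ResearchTeam": "ResearchTeam", "BlogWritingTeam": "BlogWritingTeam", "FINISH": END},
)
super_graph.set_entry_point("supervisor")
super_graph = super_graph.compile()
```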
But the idea is that this compartmentalization is supposed to make the LLM has one fewer decision or sometimes four fewer decisions right if we're using the four writer tools right this is the idea is that we instead of choosing between like 12 tools is choosing between two tools or three tools or four tools and that is supposed to make it better yeah okay. I go back to the child at the grocery store. Which kind of mustard do you want, sweetie? Do you want it to have a hundred different mustards to choose from or three? And I think it is a great question to always ask though. Can it be done with a single agent? Can it be done with no agents? Of course, we were doing multi-agent RAG today, so we used multiple agents. Next question, is it possible to share variables like DICs, data frames, or any other between agents instead of just making them communicate with natural language? Yeah, yes, absolutely. So we can do that by passing different parts of state, different components of state. As you saw in this example, we only pass the last message into state, but we could add more things and add even more state. And I think that's going to be a decision that you need to make depending on your use case. But yes, the answer is you can absolutely pass. I'm not going to say whatever you want, because that's of course literally not true, but you can pass basically whatever you'd like. Okay. Nice, nice, nice. Okay. So when dealing with multi-agent RAG, it gets hard to cite or source responses in line. Is there an effective way to do this across all the receipt retrieved sources in line yeah okay so for citation that's a little bit harder uh you could add like a state that just keeps track of the various sources and citations and then populate those at the end in some kind of dump uh that would be the uh that would be the base way that I would want to approach this if you want to be sure that you're citing everything that you can. Some of these agents aren't going to produce citations because they're not really looking for those things. But yeah, with state, basically, you'd want to manage that context as you're passing it around. You can add that fairly straightforwardly. Okay. Can agents work in parallel? Yes, of course. Yeah. So the idea would be just to make sure I understand, like you can, some of these flows can be paralyzed, right? So if you need to search two different tools, you can search them at the same time and then synthesize a response once you receive a response from both of them, right? So that's already built in through LCEL. I believe it's coming to Landgraf soon, TM, but for right now, it's built into the LCEL components, and then I believe it's going to enter into Landgraf soon enough. Okay. And what are the techniques or design patterns to make our system faster and more responsive? This multi-agent setup can be potentially slow. Can't it? Oh, yeah, for sure. It's going to be slow. I'm not going to, you know, tell you that it's going to be fast. You can make it feel faster by using a lot of streaming, right? So streaming the responses is going to mean that the time to first token is very low, but it's still going to take the same amount of time to generate the full final answer. So it is going to be something that takes a little while, especially when we have like these six to seven different calls. Also one thing to touch on from that point of view. Right. 
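On the parallelism question, a minimal LCEL fan-out sketch: `RunnableParallel` sends the same input to two retrieval branches concurrently, and you synthesize the results afterwards. The tool names reuse the sketches above and the question is illustrative.

```python
# Fan-out within LCEL: both branches run concurrently on the same input.
from langchain_core.runnables import RunnableParallel

parallel_research = RunnableParallel(
    web=tavily_tool,                 # web search branch
    paper=retrieve_information,      # RAG-over-the-paper branch
)

results = parallel_research.invoke("How far can Llama 3's context window be extended?")
# results == {"web": ..., "paper": ...} -- hand both to a downstream synthesis prompt.
```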
This is where a tool and integration like Langsmith, which we didn't touch on in the notebook, but is easy to integrate, comes in and answers a lot of the questions we've seen in chat. How do we know how many tokens, how many calls? What path does it take? All of those can be added or tracked through through Langsmith. If you if you use that integration. Yeah. And I just want to sort of mention shout out to Garrett, big homie in our community. He's building deep writer and if you want to talk about how it's slow and potentially expensive to do multi agent stuff, He'd be a great resource for you to follow and to start a DM thread with. He's all about all of this and constantly building every day. So it looks like we've got a lot of other questions, but we are about at time. We will collect these questions and we will try to make a few posts in the week to come on LinkedIn. So give us a follow there. But that's it for today. We wanted to make sure that we end on time. We'll be back with more multi-agent stuff soon. You can count on that. Thanks so much, Wiz, for walking us through that. That was incredible. We will wait on publishing our first blog until we think it is truly dope enough. And let's go ahead and close it out for the day. If you guys enjoyed this and you don't know AI Makerspace yet, we'd love to see you on Discord real soon. We're building, shipping, and sharing with folks all the time. And we'd love to have you as a part of our community starting now. You can start learning for free of course on YouTube. We've got an open source course on LLM Ops that we taught last year. We look forward to open sourcing another course here very soon and we are always running our bootcamp courses. Our flagship one is the AI Engineering Bootcamp. It is an eight-week course that walks you through everything from your first LLM application through the patterns that you need to leverage and build up to multi-agent frameworks. We are starting our next cohort on May 28th. It's kind of a high bar and quite a bit of friction to get in. So apply now and start working your way through the AI Engineering Bootcamp Challenge. To check out more events, if you aren't familiar with us, check out our awesome AIM Index on GitHub. You get direct access to all of the code. There's always concepts and code with every event that you will join on YouTube. Again, we're back every week on Wednesday, same time, same place. We hope to see you again soon. Like and sub and in the meantime, keep building, shipping and sharing, and we will most certainly do the same. Have a great week, everybody. We'll see you soon.
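For the token, call-count, and path questions from the chat, the LangSmith integration mentioned above is enabled with a few environment variables (the project name here is illustrative), and streaming the compiled graph lets you watch each step as it happens:

```python
import getpass
import os
from langchain_core.messages import HumanMessage

# LangSmith tracing: every LLM call, tool call, and graph step gets logged.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LangSmith API key: ")
os.environ["LANGCHAIN_PROJECT"] = "multi-agent-rag"   # illustrative project name

# Streaming lowers perceived latency and exposes intermediate steps.
for step in super_graph.stream(
    {"messages": [HumanMessage(content="Write a dope blog on extending Llama 3's context window.")]},
    {"recursion_limit": 50},
):
    print(step)
```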
Multi-Agent RAG
3649
AI Makerspace
20240508
Discover how to integrate multiple independent agents to tackle complex problems effectively using the latest frameworks like AutoGen, Crew AI, and LangGraph. We'll dive into the innovative multi-agent systems, particularly focusing on the shared scratchpad approach in LangChain, and demonstrate building an advanced Agent Supervisor model. This system enhances problem-solving by coordinating agents, each with their own scratchpads, under a supervisor that manages the final outputs. Whether you're a developer or just fascinated by AI's potential, join us to learn, interact, and get hands-on with the future of collaborative AI technology. Click now to be part of this journey into multi-agent systems! Event page: https://lu.ma/agentrag Have a question for a speaker? Drop them here: https://app.sli.do/event/wPDemMAc9nzV96DFmBzXz5 Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/6NNYtu7MiSUcnWAh6
2024-06-10T01:54:11.914263
https://www.youtube.com//watch?v=xmfPh1Fv2kk&t=1s&ab_channel=AIMakerspace
Hey, whiz, as we've been saying in class, as goes retrieval, so goes generation when it comes to rag. Is there like a right way to do retrieval? I don't know about right way, but there are certainly lots of awesome ways. Yeah, so once we get like a RAG system set up, we want to take to the next level. And how exactly are we supposed to do that? Well, it depends a lot what you're trying to do, the kind of data you have, the kind of information you want to retrieve. It turns out there's a lot of awesome ways to do it. And as always, we got this performance versus cost thing. Methods, methods, methods, algos, algos, algos coming out all the time. Do we need to care about all of them? Or are there just a few that we really should be focused on today for our tool belt? I think, you know, as it stands right now, there's a few we should be focused on making sure we have in our tool belt. Absolutely. Rock it. Yeah. All right. That's what we're going to do today. We're going to try to break down exactly which of these you should know about. And we're trying to give you some context for you to optimize your own context. Sound good, Wiz? Sounds awesome. All right. We'll see you back in a little bit man so today we want to talk advanced retrieval everybody and welcome this is the sort of second step of our kind of advanced rag sequence we talked about chunking last week and you'll realize today if you joined us for that one retrieval really builds directly on the back of chunking i'm excited for this my name is greg i'm co-founder and ceo of ai makerspace that's chris the wiz alexia co-founder and cto of ai makerspace and we're pumped to have you with us today throughout today's presentation if you have any questions along the way, please throw them into the Slido link that will drop in the YouTube chat now, or just go ahead and smash the chat. Let us know what you're thinking, and we'll see if we can answer each and every question along the way today. We've got quite a bit of background to get into today, so let's go ahead and rock and roll. Advanced retrieval, that's the name of the game today. And as we align towards today, we want to understand that it's not exactly a figured out science, this idea of how do you do retrieval in any given context. So we want to understand, first of all, how retrieval in any given context. So we wanna understand, first of all, how retrieval fits into RAG, and we're kind of zooming in on specific RAG pieces, but we wanna have the big picture in mind when we do that. We wanna be able to look at the different algorithms and compare performance across them. And importantly, we want to be able to understand the fine lines between things like chunking and retrieval and ranking or re-ranking. There's quite a few methods, but we have to set the context of our discussion first. We're going to talk about those methods. We're going to show you re-ranking. We're going to discuss it a little bit. But really, at the end of the day, this is about trying to get the big picture and understand where advanced retrieval methods fit in. We're going to walk you through what you need to know to get there now. Let's start with RAG. One of the common threads that we're going to run through today is we're going to be dealing with one of our favorite types of data, some movie review data, and we're going to be doing John Wick movie reviews. 
Now, one of the things that you could easily do is use a simple chat-with-your-PDF style application. You could upload, let's say, the Wikipedia page on John Wick 4 that you just printed to PDF, and you could ask some questions, for instance, was John Wick 4 postponed due to COVID, and for how long? Using this simple retrieval chain, we can ask, was the movie postponed due to COVID? We can read the page and see that it actually was postponed, and we can start to understand a little bit about how important retrieval is, because we can look at John Wick 4 on the Wikipedia page and see that it clearly was postponed due to COVID. We're just not able to see the screen that you're sharing right now. Yeah, thanks a lot, man. So what we'll want to do is go ahead and do the full screen share here. So if I do a little John Wick 4 upload here and ask, okay, was the movie postponed due to COVID? We can see, yes, indeed it was. We can see this directly on Wikipedia, for instance, but we can also see it directly in our source data, and we can go and look at the different sources. Now, we've got four sources here, and not all of them mention COVID. In fact, only one of them, the one at the end, mentions COVID. This is important because we're still returning source 0, source 1, source 2, and the question is, are these really adding value? We can look at exactly what's going on in this application, and we can see, for instance, what chunk size we're using. We can see it's actually 1,000 out of the box, and it's a zero, or rather a 100, character overlap. We talked about the recursive character text splitter last time. The question is, is this really enough? And the answer is potentially, no, it's not enough for any given context. So what we want to do is make sure that we're returning all the right information for the specific thing that we want to be able to do. And we want to make sure all of this serves the end we have in mind: avoiding hallucinations, making our answers fact-checkable, returning the right reference material, and improving our answers as a result. But we want to avoid redundant reference material. We want to avoid useless reference material. We want to avoid basically everything in our context window that's not relevant, because we're just pumping in tokens that aren't going to help us, and that's additional cost. And so to really understand this process, we want to make sure that we have the correct overview of exactly what RAG is and where advanced retrieval fits in. We can break down RAG into two key components. The first one is dense vector retrieval. This is, of course, the retrieval piece. And the other one is in-context learning: what exactly are we putting in the context window? Both of these matter when it comes to looking at advanced retrieval methods, because we ask a question, and that question gets turned into a representation in embedding space.
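As a rough sketch of the chat-with-your-PDF setup being demoed here, assuming the LangChain community loaders and the out-of-the-box settings mentioned above (1,000-character chunks, 100-character overlap); the PDF file name is hypothetical:

# pip install -qU langchain-community pypdf langchain-text-splitters
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load a Wikipedia page printed to PDF (file name is illustrative).
docs = PyPDFLoader("john_wick_4_wikipedia.pdf").load()

# The out-of-the-box settings discussed above: 1,000-character chunks, 100 characters of overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

print(len(chunks), "chunks;", chunks[0].page_content[:200])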
We search for similar stuff in embedding space to our question, for instance, in the example we just saw, and the similar stuff is returned. Now, what does similar mean? Well, we saw not necessarily exactly what we might expect, although some of the chunks are very relevant, not all of them are. When we find those similar chunks, we can set up our prompt template and we can return the reference material from the chunks in natural language. So this right here is the retrieval process using dense vectors. The piece where we're actually improving our generation is the part where we augment our context, we augment our prompt. This is in context learning. So this is sort of the RAG process, but it all happens prior to the generation. And the generation is where we actually get our better answer and we can, you know, yay, be happy that RAG was the right solution for us. So as we think about this R in RAG, we think about this retrieval process, we really need to break this thing down. We ask a question, we search this vector store, this vector database, this index for stuff related to our question that's similar in vector embedding space, we return that stuff we found in natural language. And the first piece of this is the piece where we're actually chunking and building our vector store. Okay, we need to set this up so we can ask a question in the first place. When we look at vector stores, there's many of them. We're going to use Qdrant today, and it's one of the hottest ones out there. If you're going to pick one off the shelf, go ahead and pick this one. But regardless of which ones you pick, it matters what vectors you're storing and why. So to create any vector store, you're going to have to take your documents. You're going to have to chunk them. You have to create embeddings for each chunk. And those embeddings are what gets stored. Chunking is simply breaking things into small pieces. And as we've said before, the last event we did on semantic rag, it's so important to remember that whatever gets chunked gets retrieved. And this is where we need to understand the fine line that exists at the vector store. Because how we chunk is one piece of the puzzle, but what we know about the chunks is a piece we're going to continue to explore today. We talked about chunking methods previously. Today we're going to continue to explore today. We talked about chunking methods previously. Today, we're going to leverage the baseline recursive character text splitter best practice chunking strategy that really takes the fixed size approach, the chunk size and overlap, and augments it slightly. Augments it so that we can make sure that we're not stopping in the middle of words. Rather, we're more likely to stop on double new lines, which would be sort of a new paragraph, perhaps a new chapter in a novel or single new lines. Alternatively, we wanna chunk where we have a space. We wanna really avoid chunking mid-word on character count, if at all possible. This recursive text splitting allows us to get close to our chunk size, but do a little bit better job. So here's an example from an Alice in Wonderland book, Down the Rabbit Hole, and we see we chunk on a double new line here and we chunk on a space here. This is with a simple chunk size 200 and overlap zero example at the beginning of this novel. Now there are more sophisticated ways to do this chunking. And one of those ways is to look at the meaning of each chunk. 
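Before moving on, a minimal sketch to make the "similar stuff in embedding space" search concrete. It assumes an OpenAI key is set, reuses the chunks from a splitter like the one above, and uses plain NumPy cosine similarity instead of a vector store; the query string is illustrative:

import numpy as np
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

chunk_texts = [c.page_content for c in chunks]   # chunks from the splitter sketch above
chunk_vectors = np.array(embeddings.embed_documents(chunk_texts))
query_vector = np.array(embeddings.embed_query("Was John Wick 4 postponed due to COVID?"))

# Cosine similarity: dot product of the vectors divided by the product of their norms.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_vector, v) for v in chunk_vectors]
top_4 = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:4]
for i in top_4:
    print(round(scores[i], 3), chunk_texts[i][:80])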
The meaning of things semantically is something we want to be leveraging during chunking as well as during retrieval, if we can afford the additional computation cost to see that improvement in performance. We talked about semantic chunking last time, and it worked very well. What we're going to talk about this time is the retrieval piece, where we're returning the stuff. In the retriever, which is essentially a wrapper around the vector store, we're returning natural language. The chunking happens upstream, where retrieval happens downstream. We can kind of look at these two things as separate, although keep in mind there is a fine line. And as we mentioned in the beginning here, as goes retrieval, so goes generation. When we do retrieval right, we get better at generation. That's because we're augmenting the context. We're giving the model more high-quality context to do its generation with. This idea of in-context learning goes back to the GPT-3 paper, Language Models Are Few-Shot Learners. As we move from instruction-only zero-shot to one-shot with big and performant enough models, we get a huge increase in accuracy for our generations across many different tasks. And so this idea of in-context learning really comes from scaling up the LLMs big enough to see prompting become a very powerful pattern that we can use without fine-tuning. And of course, when we do prompt engineering, we want to make sure that we have those clear and specific instructions and we're giving it the best possible context. This is our focus today. Specifically, we're focused on the retrieved context. Another way to think about this is that as you prototype, you always start with prompting. But as you move beyond prompting, you're often picking up RAG as the next pattern to leverage. This is simply the context optimization that we're doing here when we move from prompting to retrieval augmented generation. People often ask, well, do I even need RAG because I have these super long context windows? Well, what do you mean by optimization? Are you trying to optimize how much context you can actually put in? Or are you trying to optimize how high quality each piece of context is, and as a result optimize the efficiency of your inference as well as the cost, if it's, say, on a per-token basis? So this context optimization is what we're focused on, getting this piece right. And in fact, this is the whole shebang, whether you talk to LangChain, who's really focused on enabling LLM applications that leverage context and reasoning, or LlamaIndex, who will tell you that they are a data framework for LLM applications that benefit from context augmentation. Both chunking and retrieval affect this context. Now, what is good context? Well, there are a number of ways we can measure this, and we've talked about this before. There's a leading framework called the RAG Assessment (RAGAS) framework. We encourage you to check it out if you haven't yet. But ultimately, what good context means is your problem as the AI engineer building this system, or the AI engineering leader. If you want to look at RAGAS a bit closer, we'll see it in action today.
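To make the in-context learning piece concrete, a minimal prompt template sketch; the wording is illustrative, not the exact prompt from the notebook:

from langchain_core.prompts import ChatPromptTemplate

RAG_TEMPLATE = """Use only the provided context to answer the question.
If the answer is not in the context, say you don't know.

Context:
{context}

Question:
{question}"""

rag_prompt = ChatPromptTemplate.from_template(RAG_TEMPLATE)

# The retrieved chunks fill {context}; the user's query fills {question}.
print(rag_prompt.format(context="<retrieved chunks go here>", question="Did people like John Wick?"))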
We encourage you to really get this retrieved context right, we want to get the most relevant information. We want to minimize redundance. We want to make sure that it is the stuff that's going to help us do a better generation. It's meaningful and relevant. And this is what we're focused on when we do advanced retrieval. So today we're going to focus on looking at this through the lens of IMDB movie review data from the John Wick movies. We're going to take this data and we're going to create a vector store. Now, the way we do this kind of matters as well as the way we engage with it. And so let's talk about the different stages of retrieval. We've already discussed this, but this is sort of broken down to a slightly more zoomed in level. When we get our data from IMDB, we're going to load it, we're going to transform it. This chunking, this deciding how to tokenize, how to turn it into embeddings before putting it into the vector store, these are all key components. You might say chunking happens here and retrieval happens here. But we would put forth for you today that there is a fine line that exists at the vector store. What is this fine line exactly and how should we think about it? Well, this is where we can look at each of the most useful advanced retrieval methods and get some insight. So first off, simple, super naive, regular retrieval. Let's base on ourselves with that. Then let's look at parent document retrieval, followed by self-query retrieval, followed by time-weighted retrieval, followed by contextual compression retrieval. And hopefully we can start to see this fine line come into focus for us. When we're doing naive retrieval, all we're doing is we're finding nearest neighbors through a simple cosine similarity. It's a simple dot product. And we're returning naively anything that's similar in embedding space. As we saw in the John Wick example in the beginning, not everything is necessarily super duper relevant to the question that we asked. So maybe we can do better. And that really motivates this entire space. When you think of naive retrieval, there are a few things you can tweak, but it's kind of what Langchain calls vector store backed retrieval. It's just simple vector store, cosine similarity based. You can set the similarity threshold. You can also specify that you only want the top K results back. You don't want 50 or a hundred. You want, let's say the top five or 10. This is a great place to start. And if you're prototyping, you should start here. But after you've prototyped, after you've set the baseline for your evaluation, after you've started to make sure that you understand what the performance is in the current state, and you wanna take it to the next level, you can start investigating things like the parent document retriever. And the big idea here is that small docs, small chunks, they're pretty good because they're more precise. They really are a little bit more accurate in terms of semantic meaning and what is going on in that particular part of the document. But big chunks are good too, because, you know, there's a lot more context, you know, and we're back to this word here. So think about a sentence is always within a paragraph. A paragraph is typically within a chapter or a section, chapters within a novel. I mean, there's a hierarchy to this. And this idea of leveraging the hierarchy is what the parent document retriever is all about. And the technique here is pretty simple. You search for the small chunks and you return the big ones. 
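Stepping back to the naive, vector-store-backed retriever for a second, the two knobs mentioned (top-k and a similarity threshold) look roughly like this; the threshold value is arbitrary and vectorstore is assumed to be the Qdrant store built later in the walkthrough:

# Plain top-k retrieval: return the 10 nearest chunks by cosine similarity.
naive_retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

# Same store, but only keep chunks above a relevance-score cutoff (0.5 is an arbitrary example).
thresholded_retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 10, "score_threshold": 0.5},
)

docs = naive_retriever.invoke("Did people generally like John Wick?")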
So it's interesting here, right? Because we're still talking chunks, but we're doing retrieval. I thought retrieval happened after chunking. Well, what needs to happen here is that during the chunking process, we actually have to generate metadata for each of the parent and child chunks. Specifically, the child chunks are the thing that's real: they're in the vector store. Parent chunks are held in memory; that's really a separate store. But you want to think of the child chunk as knowing who its parent is. So what you do is you search for the child chunks and you return the parent chunks. But you have to have the metadata created for each chunk at the point of creating the vector store. This metadata is essential. Having this metadata created at this point is what allows us to do an advanced retrieval technique, and so these two things are really inherently linked, as we can see in multiple methods. When we look at self-query retrieval, for instance, one way to think about self-query is as text-to-SQL type functionality, meaning you can kind of chat with your database a little bit. And the way that's done is through, again, metadata filtering. What did Bar say about Foo, for instance? We're going to filter on some metadata we have. That example is a bit too general to be really useful for what we're doing today, but what we're able to leverage in our example with the John Wick movies is that if you look at a simple movie review, we have metadata on each of our chunks. Let's say each chunk is an entire John Wick review. They're not very long, so we can make each one a single chunk. We also have the star rating, the title, the reviewer, the date, and the actual review text, which is our chunk, and then how many people found it helpful. There's lots of metadata that we can draw from. If we give our retriever permission and access to look at that metadata, we can potentially very much improve our results. This is a very nice, very clean, sophisticated way to take retrieval to the next level. Again, it's all about that metadata we're creating at the step where we're generating our vector store. Thinking about that fine line, we can also do things like time-weighted retrieval. Time-weighted retrieval out of the box essentially asks, when was this document last accessed? The most recently accessed data is treated as the most useful, with the idea that if you're accessing something very frequently, it's probably pretty useful. Now, for us in this movie review example, we're going to want to answer based on which movies came out most recently, in the spirit of the most recently accessed data. So we're going to modify it a little bit, and you'll see how this works. That one's pretty straightforward: time-weighted retrieval is probably kind of what you thought. Maybe not exactly, but you're looking at what is most recent to retrieve, which is potentially very useful for your application. And then finally, we can think about contextual compression retrieval. Now, this is exactly what it sounds like: we're compressing the context.
And so when we think about this returning of the natural language, the thing that's interesting here is we're doing this to the natural language, the language that we return specifically, the reference material to the context. So, you know, you can think about compressing a paragraph down to a few words or many chunks down to just a few chunks. And this compression idea, especially when we think many chunks down to a few chunks, could start to really give us insight into this idea of what is the proper rank order of chunks and how many chunks should I have if I want to let's say If I want to, let's say, minimize redundancy, maximize signal to noise, and really get the best possible generation, best to be defined by you and your application. Well, this is kind of heading us down the path of the re-ranking algorithm. And re-ranking is potentially one of the easiest things to apply, and we would highly encourage you to try it out of the box, as we'll show you in today's build. Again, we're going to use the data from John Wick movies. We're going to use Langchain and a Qdrant vector store for this. We're going to use OpenAI models for both embeddings and for our chat model. And we're going to walk through each of the methods of retrieval from naive to parent document to self query to time weighted and ultimately to contextual compression and re-ranking get ready everybody it's time for advanced retrieval for rag with the whiz. Let's walk them through the notebook, my man. Oh, yeah. Okay. So advanced retrieval with laying chain is conceptually maybe difficult, but, you know, the actual code, thanks, laying chain, pretty straightforward. So what we're going to do today is we're going to do a couple things. We're going to look at this advanced retrieval system, how we, you know, use it, how we think about it. And then we're going to see it on a number of different RAG pipelines. Now, we're going to be exploring a lot of different options. We will be releasing as well a notebook that's going to talk about uh how to assess these things against each other we're not going to go over that in the uh in the demo today just because it's going to be rather dense but the idea is that if we pit all these things together we should be able to figure out which one is best for our use case the first thing we're going to want to do is just grab straight up uh dependencies. So we do need our dependencies. We're going to use Langchain, Langchain OpenAI, and Langchain Cohere. This is because we're going to be using OpenAI as our LLM and our embedding model, and Cohere as our re-ranker when we get to contextual compression. We're also going to be using Quadrant today as our vector DB just because I like it, and Lark because we need it for our self-query retriever. We're going to provide our open AI key and cohere API key. So far, so good. This is all the normal stuff. So now we're going to collect some data. And what data do we need? Well, we need John Wick review data. So we have available on our GitHub a bunch of different John Wick review CSVs, so JW1234 that you're welcome to use. You can create your own as well. If you do create your own, just make sure that the metadata is aligned properly. That will be a necessary step. So we get all this data. That's awesome. Okay, once we have all this data, we're gonna wanna do something with it. And what we're gonna do is we're gonna create documents out of it. 
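A sketch of this setup step, assuming the split-out LangChain packages; the CSV file name and column names are assumptions based on the review fields described:

# pip install -qU langchain langchain-openai langchain-cohere langchain-community qdrant-client lark
import os, getpass

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
os.environ["COHERE_API_KEY"] = getpass.getpass("Cohere API key: ")

from langchain_community.document_loaders import CSVLoader

# Column names are assumed; match them to the actual John Wick review CSVs.
loader = CSVLoader(
    file_path="john_wick_1.csv",
    metadata_columns=["Review_Date", "Review_Title", "Review_Url", "Author", "Rating"],
)
docs = loader.load()   # page_content is the review text; the listed columns land in metadata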
Now we're gonna get kind of a little bit manual here in our document creation because we do want to make sure that we have all of the appropriate metadata. So what we're gonna do is we're going to do is we're going to add a bunch of metadata to our CSV documents. So we're going to leave the review itself as its own, you know, that's like the base content, right? Then we're going to add these metadata columns of review date, review title, review URL, author, and rating, right? And then we're going to add the movie title for each movie. Now we're going to be kind of naive about this. We're just going to call them John Wick 1, 2, 3, and 4. They technically have different names, but good enough for right now. We're also going to cast our rating to an int. If it exists, else we'll just make it a zero. If they didn't provide a rating, you can think about this a little bit more and make a different decision here, or maybe get the, did they like it or not from some kind of classifier. So we could build that and then determine the actual review score, but we're just going to set it to zero because it's straightforward. And then we're going to do this last accessed at. We're kind of cheating here, right? We're just kind of declaring when these were last accessed at. And we're going to say that the first movie was last accessed, you know, basically three days ago. And then the last movie or the fourth movie was accessed today or now. And so the reason we're doing this is to kind of illustrate what the time-weighted vector store is going to be doing. Obviously, this is not literally what it's for. It doesn't really care when the documents were added. It more cares when they're accessed last so that, you know, very, very frequently accessed documents are going to be kind of boosted. But we're just going to use this to illustrate what's happening behind the scenes. So let's just make sure that our documents worked out. And indeed, they worked out. We have our page content, which is our review. And then we have a bunch of metadata. We have the source. This is the CSV it's from. We have the row content, which is our review. And then we have a bunch of metadata. We have the source. This is the CSV it's from. We have the row it's from. We have the date that it's from. We have the title. We have the URL. We have the author. We have the rating. We've got the title of the actual movie. And we have this last access stat. So let's go. Next, we're going to set up our quadrant vector store. And I did confirm it is called Quadrant, not Qdrant. So I've been calling it Qdrant this whole time. It happens. We're going to realize that it's called Quadrant now and apologize deeply to the company. But other than that, we need an embeddings model. We're just going to use TextEmbedding3.small because it's small. It's cost effective. Easy peasy. Small, it's cost effective, easy peasy. Yes, Vignesh review title was part of the original data. So if we look at the kind of original CSV here, so if we go to the data, I'll zoom way in here so we can see it. You can see we have these columns that already existed. We have this row, which is our index. We have a review date, author, rating, review title, and review. All of those were already populated for us. And so we're just indicating in the CSV loader that we want those to be metadata. We do not want them to be part of what's called page content. So that's why we did that. Great question, though. 
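A sketch of the metadata step described here, with assumed field names; documents_by_movie is assumed to be the four per-movie lists loaded with a CSVLoader as above, and the last_accessed_at spread is the synthetic three-days-ago-to-now trick from the walkthrough:

from datetime import datetime, timedelta

documents = []
for i, movie_docs in enumerate(documents_by_movie, start=1):
    for doc in movie_docs:
        # Naive titles, as in the walkthrough: "John Wick 1" ... "John Wick 4".
        doc.metadata["Movie_Title"] = f"John Wick {i}"
        # Cast the rating to int, defaulting to 0 when the reviewer left it blank.
        doc.metadata["Rating"] = int(doc.metadata["Rating"]) if doc.metadata.get("Rating") else 0
        # Pretend JW1 was last accessed roughly three days ago and JW4 just now,
        # purely to illustrate the time-weighted retriever later on.
        doc.metadata["last_accessed_at"] = datetime.now() - timedelta(days=4 - i)
    documents.extend(movie_docs)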
So we're going to create our embeddings model and then we're going to create our vector store, which is going to be quadrant from documents. And it's easy enough, right? We're going to use the location being memory, right? So this is an in-memory vector store. You can use a hosted self or otherwise quadrant vector store, but we're just going to keep it simple with memory so that no one's got API issues. And we're gonna name the collection John Wick because why not? I mean, it's about John Wick. So the first chain we're gonna create is the naive rag chain. Since we're focusing on the R in rag today, we'll create our retriever first. We're gonna have a retrieval. We're gonna retrieve 10 documents. Now this is just gonna use the simple cosine similarity. That's it, right? That's all this is doing. It's looking at your query. It's looking at all the documents or a subset of the documents because it's using approximate nearest neighbors. And then it's saying, these are the 10 most related. Here you go. So easy peasy there. We love to see it. We are retrieving 10 documents, which is a lot of documents. And we're doing this because we know down the line we're going to use contextual compression to weed that down to the smallest number of documents. Right. So we're going to we're going to compress that 10 down to a to a smaller number. Then we put the A in RAG, which is just our prompt. We're just kind of sticking with a simple one, right? The star of the show today is R. So A, you know, just going to be kind of normal. And then G, the generator, going to be kind of normal, just using a GPT-35 turbo. It's a heck of a model. We're going to create our naive retrieval chain, and you'll see we've got this big old blob. This is a familiar blob to some of you if you've been following along with us as we've gone through everything. But, you know, the idea is, I've put on comments, I should help explain what this is doing. Because we're, because we care so deeply about retrieval, right? We really want to make sure that we are getting the documents that we are retrieving. So we have to make sure that we populate those documents at the end. We can't just get the output because that's only going to let us see what's happening end to end, which is useful, but not super useful. So we're going to need to do this kind of slightly more complicated pattern to make sure that at the end, we get both our response from the model and we get the context that we retrieved, right? Makes sense. We want to think about retrieval, so we're going to retrieve the documents and populate them. Now, we're going to see how this symbol chain does in a few different prompts, right? Put a little comment here. You might think that we've cherry-picked prompts that showcase the individual skill. We sure have, right? So that's, you'd be correct. The first query, did people generally like John Wick? And we're just going to look at the response because only Ragus really is going to care about the context in a way that's meaningful. So did people generally like it? And yeah, it seems like they did, right? Did any reviews have a rating of 10? If so, can I have the URLs to those reviews? And we get a rating of 10. We get a URL that has a rating of 10? If so, can I have the URLs to those reviews? And we get a rating of 10. We get a URL that has a rating of 10. That's great, right? Let's go. And then what happened in John Wick? And then we get the, basically, this is the plot of John Wick 1. 
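A sketch of the naive RAG chain described here, returning both the response and the retrieved context; imports, prompt wording, and variable names are representative rather than copied from the notebook:

from operator import itemgetter
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = Qdrant.from_documents(
    documents, embeddings, location=":memory:", collection_name="JohnWick"
)
naive_retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

rag_prompt = ChatPromptTemplate.from_template(
    "Answer using only the context.\n\nContext:\n{context}\n\nQuestion:\n{question}"
)
chat_model = ChatOpenAI(model="gpt-3.5-turbo")

naive_retrieval_chain = (
    # Retrieve 10 documents for the question, and pass the question through unchanged.
    {"context": itemgetter("question") | naive_retriever,
     "question": itemgetter("question")}
    # Keep the retrieved context alongside the generated response, for inspection or RAGAS.
    | RunnablePassthrough.assign(response=rag_prompt | chat_model)
)

result = naive_retrieval_chain.invoke({"question": "Did people generally like John Wick?"})
print(result["response"].content)
print(len(result["context"]), "contexts retrieved")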
Okay, so that's important to keep in mind when we get into a later retriever. This is the plot of John Wick 1. Okay, so that retriever is like, it's fine. It does the job. I mean, we're happy about it, right? Nothing's super odd. The results are not spectacular. All right, fine. So what about parent document retriever? Well, as Greg said, basically what we're going to do is we're going to take each raw document and or a large chunk of document, okay? Then we're going to store those larger chunks or raw documents in our memory store. So the memory store is not a vector store. It's just kind of chilling in memory, right? So we're not going to do any semantic similarity searching on those big chunks or those parent chunks. We're going to chunk each of those parent chunks into smaller documents, and we're going to associate them with the parents, and then store those in a vector store. Okay, so the idea is we're going to take one big document, turn it into a bunch of smaller documents, and associate it back to the big document. Okay, then we're going to put those into our vector store. So how this looks is we're going to search for those smaller chunks that are in our vector store, but we're going to return the parent chunk that they're associated with, right? So let's say we have a parent chunk that's been split into four different chunks, and our semantic similarity search returns that, you know, three of the four of those chunks are the most related documents. We're just going to return the one big parent document chunk, right? We don't have to return those smaller child chunks. And this is basically the way that we're going to think about this. I got a question in chat. Is RAG only for Q&A or can it be used effectively for summarization? You can definitely use RAG to augment summarization pipelines. It is not just for Q&A, though, of course, that is the cleanest way to demonstrate a lot of its power. So how do we actually make this whole child chunks, parent chunks, associate, all that? Well, basically, we're going to first just define our documents to be our parent documents. Then we're going to implement a recursive character text splitter. And that's going to be our child splitter. So that's what's going to turn our parent documents into our child documents. We're going to, again, create a quadrant vector store. This time we're going to do kind of the verbose way to do this. So we're going to create a quadrant client. We're going to add a collection. We're going to define the vectors that we need. So this is going to be the size of the text embedding three model from OpenAI. And then we're going to define our distance metric. Then we're going to create that vector store with quadrant. We're going to point it to full documents. And we're going to point it to full documents and we're going to point it to uh you know the correct embedding model and the correct client easy peasy then we're going to create a uh parent document retriever we're going to have our vector store as our parent document vector store our doc store as store this is in memory store and our child splitter as child splitter easy peasy thanks laying chain and then we add our documents. Now, as we add our documents into this retriever, what's going to happen? It's going to get a chunk. It's going to turn that into a bunch of small chunks and associate them through metadata. 
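A sketch of the parent document retriever setup being walked through; the collection name, the 200-character child chunk size, and the 1,536-dimension figure (text-embedding-3-small's default) are stated assumptions:

from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Qdrant
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams

parent_docs = documents                                    # each review acts as a "parent"
child_splitter = RecursiveCharacterTextSplitter(chunk_size=200)

client = QdrantClient(location=":memory:")
client.create_collection(
    collection_name="full_documents",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)
parent_vectorstore = Qdrant(
    client=client, collection_name="full_documents", embeddings=embeddings
)

parent_document_retriever = ParentDocumentRetriever(
    vectorstore=parent_vectorstore,   # holds the small child chunks
    docstore=InMemoryStore(),         # holds the full parent documents
    child_splitter=child_splitter,
)
parent_document_retriever.add_documents(parent_docs)

# Search matches the child chunks, but the parent documents come back.
docs = parent_document_retriever.invoke("Did people generally like John Wick?")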
So now when we actually call this, under the hood we're going to search for the smaller chunks, but we're going to get the full chunks back, and that's it. And again, it looks fine. It looks like it's not too much, so okay, so far so good. We're liking that. A question from chat: I assume that in production you'd want to store parent docs in something other than memory, on disk or in a vector store? Yeah, that's correct. We can start however we want; probably just leaving a bunch of it in memory is less than ideal, though. Okay, so that's the parent document retriever. Makes sense, seems cool: we search for small, we get big. The idea right here is that we want to search little pieces of information because they're likely to contain single ideas, which is likely to be best captured by the embedding process. But we know that even if a sentence is very related to our query, the sentences or the structure or the text around it is likely going to be useful context for that small piece of information. And that's the idea of the parent document retriever. For self-query, it's a lot more straightforward. We have all of this amazing metadata, right? We have movie title, review date, review title, review URL, author, rating. We have all this stuff and we're just not using it at all. Instead of not using it, which seems maybe not great, we should use it. That's what self-query does. It helps the LLM understand the structure of the metadata and then make filtered queries based on that metadata. Whenever you see this description field, you want to make sure that you write a natural language description, because it might be used by the LLM. And we're going to define the types so we know what kind of filtering we can do. The kind of filtering we can do is different if it's an integer than if it's a string, than if it's a date, and so on. We also want a default description of the actual page content. If you remember, our page content is just the review, so we're going to describe it as the review. And then, of course, we're going to use GPT-3.5 Turbo. We're going to pass in the vector store we already created. This is the same vector store that we used in our original naive implementation, because all that metadata is already in there. We don't need a different vector store; we just need a different layer on top of it. We're going to pass in that document content description and then all of our metadata field info. And then when we ask questions. Did people generally like John Wick? Okay, we get kind of the same thing: yes, they generally liked John Wick. But if we ask a question like, do any reviews have a rating of 10, and if so, can I have the URLs of those reviews? We get a much more correct response, because we're actually able to filter on reviews that have a rating of 10, since we have rating as one of our metadata fields. So we get a much better answer here. Just by looking at it, you can tell it's a better answer. And then of course, what happened in John Wick? This retriever is not really meant to help that perform better, and so it doesn't really, at least by looking at it.
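For reference, a sketch of the self-query retriever just described; the attribute names and descriptions are assumptions that mirror the metadata fields above, and lark needs to be installed for the query parser:

from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI

metadata_field_info = [
    AttributeInfo(name="Movie_Title", description="The John Wick movie the review is about", type="string"),
    AttributeInfo(name="Rating", description="The 1-10 star rating the reviewer gave", type="integer"),
    AttributeInfo(name="Review_Date", description="The date the review was posted", type="string"),
    AttributeInfo(name="Review_Url", description="The URL of the review", type="string"),
    AttributeInfo(name="Author", description="The username of the reviewer", type="string"),
]
document_content_description = "A user review of a John Wick movie"

self_query_retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    vectorstore,                      # the same Qdrant store used by the naive chain
    document_content_description,
    metadata_field_info,
)

# The LLM writes a metadata filter (for example Rating == 10) in addition to the semantic query.
docs = self_query_retriever.invoke("Do any reviews have a rating of 10? If so, can I have the URLs?")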
So that's our self-query: basically smart filtering. You love to see it. The time-weighted vector store, this one's pretty straightforward as well. We want to penalize pieces of context that aren't often used. So what we're going to do is set this up the same way we did before. This is just creating a new Qdrant client; we've already seen this, it's just the verbose way to do it. We're going to basically scale our semantic similarity scores based on a little formula. The idea being that the higher we set our decay rate, the more aggressively we're going to penalize our old data. So the basic idea here is that if the data is very new, its score is going to be higher; if the data is very old, its score will be lower. And its score is just a combination of these two things: the semantic similarity and this little decay formula. Now, you can set an aggressive or a non-aggressive decay rate. If you set the decay rate close to one, that means it's going to very quickly fall back to just the base semantic similarity. If you set the decay rate very close to zero, it's going to take a very long time for it to taper off over the course of the document's lifetime. And so that's it. The way we set this up is, again, very straightforward. We're going to use a time-weighted vector store retriever. We're going to pass in a fairly aggressive decay rate of 0.6. Then we're going to set k equal to 2, so we retrieve two relevant documents, and then we just have to add our documents and we get a little confirmation that the documents have been added. Sounds great. Now, when we use our time-weighted retrieval chain, so our time-weighted retriever here, you can see: did people generally like John Wick? Yes, people generally liked John Wick 4 based on the reviews provided. Well, that's interesting, right? Because we didn't ask about John Wick 4, but because all of that John Wick 4 data is weighted more highly, we get John Wick 4 data back. Do any reviews have a rating of 10? If so, can I have the URLs? I'm sorry, but there are no reviews with a rating of 10 in the provided context, because there were no John Wick 4 reviews rated a 10. And again, what happened in John Wick? In John Wick 4, there is a lot of action and intense fight scenes; it seems to impress some viewers with the high energy of fast-paced sequences. Again, because we weighted it higher, it's retrieving John Wick 4 information. And the last one that we have is contextual compression. Contextual compression, again, the words sound intimidating, but it's really quite straightforward. We retrieve a lot of documents that are very likely related to our query vector, and then we compress those documents into a smaller set of more related documents, more here meaning higher relatedness. And that's it. You can do this in a number of different ways. You can shrink the context itself, so you only extract the relevant text out of each of the contexts, or you can shrink the number of contexts. We're going to be using the re-ranking method, which works on that number-of-contexts axis. And basically, we're going to make a basic retriever, just like we had in our naive example.
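Backing up to the time-weighted retriever for a moment, a sketch with the settings mentioned (decay rate 0.6, k of 2); tw_vectorstore is assumed to be a fresh, empty Qdrant store built the same way as the earlier ones:

from langchain.retrievers import TimeWeightedVectorStoreRetriever

# LangChain's scoring is roughly:
#   score = semantic_similarity + (1.0 - decay_rate) ** hours_since_last_access
# so a decay rate near 1 falls back to plain similarity quickly,
# while a decay rate near 0 keeps the recency boost around for a long time.
time_weighted_retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=tw_vectorstore,   # a fresh, empty store (assumption: built as above)
    decay_rate=0.6,               # fairly aggressive recency preference
    k=2,
)
# add_documents respects a last_accessed_at value already set in metadata.
time_weighted_retriever.add_documents(documents)

docs = time_weighted_retriever.invoke("Did people generally like John Wick?")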
And then we're going to add this compressor, aka our re-ranker for this example. And the way that looks is straightforward. We define this compressor, and then we define this compression retriever, and Bob's your uncle. There you go. Again, when we pass information in and ask questions, we get good responses, basically what you would hope to get. And what we're going to do is post in the YouTube comments a notebook that describes all of the performance as defined by RAGAS. But with that, we've gone through a lot of content, so I'm going to pass you back to Greg. Don't forget to like, comment, subscribe, ring the bell notification. I know it's kind of a little bit funny, but it really does help, because we go live every Wednesday and we love to see you out here. So with that, I'll pass you back to Greg. Yes, awesome. Thank you so much, Wiz. In case you missed that last piece, that last little fine line, I just want to double-click on this re-ranking idea. This is maybe the easiest thing to pick up and use straight away as you start going beyond naive retrieval and into the next sort of advanced piece. And to understand exactly what re-ranking is doing, we can recall the semantic RAG piece we discussed last week, where the hypothesis was that we could use embeddings of individual sentences to make more meaningful chunks. Well, when you look at Cohere's Rerank algorithm, the idea is that you can optimize your retrieval to improve generations through this quote-unquote semantic boost, which is fundamentally giving us a more meaningful list of retrieved chunks. Semantic meaning is at the center of all things NLP, and it's no different here. Finally, it's important to note as a takeaway and a quick trailhead that we can use embedding models directly for re-ranking. Now that we've got this screen share dialed in, I can show you directly on the Massive Text Embedding Benchmark (MTEB) leaderboard that there's a re-ranking benchmark list. So I encourage you to check this out if it's something you're interested in looking into more. And finally, we have made our way through quite a lot of content and advanced methods today. We see that there is a fine line, not just between contextual compression and re-ranking, but also quite a fine line between the stages of chunking and retrieval that meet at the vector store. Keep this fine line in mind as you're building and designing how your data is chunked and what metadata you're going to leverage to do retrieval. Given the specific questions, the specific things you want to be able to answer or summarize, you're potentially going to want to choose different metadata, chunking strategies, and retrievers. And again, we'll drop the quantitative analysis we did, which was just too much for today, in the YouTube comments. If you want us to go deeper on any of this stuff, we'd love to hear from you. But otherwise, we've got a few minutes to answer questions, so we'll go ahead and get started with the Slido. Wiz, that was a lot today, man. That was a lot. Okay, maybe we overshot the mark a little bit, but we've got some questions. Paolo says, hey, nice hat. Thanks, Paolo. And he asks, does chunking work for tables in documents? You're muted. We can't hear you, Chris. Oh, sorry. Yes, I think it can. It kind of depends on what you mean. I would say loosely, though, we don't really want to chunk tables.
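For reference, a sketch of the compressor-plus-retriever pairing described a moment ago; the Cohere rerank model name and the top_n value are assumptions:

from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
from langchain_cohere import CohereRerank

# Cohere's reranker scores each of the 10 naive results against the query
# and keeps only the top few.
compressor = CohereRerank(model="rerank-english-v3.0", top_n=3)  # model name is an assumption
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=naive_retriever,   # the k=10 retriever from earlier
)

docs = compression_retriever.invoke("Did people generally like John Wick?")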
We would like to keep them together, right? Because they describe a kind of single way to think about something, right? So if you had a very massive table that you wanted to chunk, you would kind of want to retain that structure. So I'm not sure that we would want to chunk tables, but you certainly can. Yeah, why not? Okay. Okay. Manny asks, what's up Manny? When returning results, is there a command for the prompt template that can make the LLM return citations it has with the probability of accuracy in percentage? I guess I'm not quite sure what accuracy is in this case. So kind of. What we can do is we can say things like, so in the case of this notebook, we're returning two things. We're returning the response and the contexts. So basically, that's the citations that it used. That context is what it used to give you the actual original answer. So that's why we set up our chain in that way. When it comes to this probability of accuracy in percent, what we can do is we can see whatever measure we use, we can forward that score, right? So like if we're talking about cosine similarity, we can forward that score. The meaningfulness of that, or a way to directly port it to like percentage accuracy based, that part's a little bit less clear, but it's absolutely true that we can forward the score from our retrieval pipeline, even though it might not mean much in a vacuum. Okay. Manny has a couple either or here. Any vector store specifically required to achieve these retrieval methods or can we do an independent vector store? You do have to use a vector store that supports things like metadata, metadata filtering. Absolutely. Yes. So Quadrant, Chroma, kind of all of the normal ones, even Face, as long as you set it up correctly with an index. So you can make a vector sort of powered by face, but it is not going to work out of the box with every single vector store you can imagine. Quadrant, shout out to quadrant. Learn something new every day. Yeah. All right. Got it. And then can we do both of the, all of these methods in both Langchain and Lama index today? Yeah. Okay. All right. Cool. Cool. Then Richard asks, which are the best rag techniques when we have an app that will add new documents all the time and which for very long documents? Yeah. So when we're adding a lot of new documents, you could see something like the time weighted vector store being useful, right? The idea is that you could add documents to it, and those would be the most recent versions of the documents. So if you're adding, say, blogs to your vector store, right, you could weight the new ones higher. And obviously, that's going to depend on how people interact with them, but that kind of thing is going to work very well. Otherwise, it's up to you about, you know, how that kind of thing is going to work very well. Otherwise, it's up to you about, you know, how that information fits in. You can do, you can add it with specific metadatas and you self-query, you know, just depends on what you need to do with new documents. You know, is it that new ones are more important? Is it that new ones should be categorized based on existing categories? You know, you get, you get the idea. And for long, uh, documents, it's, they're, they're all going to be great. Uh, obviously we can retrieve less of them depending on our context window. 
But for long documents, something like contextual compression is probably going to be very useful for you, as you're going to be able to squeeze out just the valuable juice from each of those long chunks or documents. Okay. And then I'm going to ask one of these. We got a bunch of straight classic questions: fine-tuning, LangChain versus LlamaIndex, et cetera, et cetera. Tell us more about why you chose Qdrant. What are the technical reasons? People want to know. It's just so good. It's very efficient, and it's very good at big scale. There you go. I mean, Qdrant will be good for you from when you have one daily active user to when you have 10 million. Yeah, and it's also the most recent company to raise money, if that says anything to you. It says something to me. Go Qdrant. All right. If you have more questions, please throw them onto the YouTube comments after the session and we will get to them. Thank you so much, everybody. Wiz, thanks for walking us through that heck of a session today. All right, time to close out. And thank you so much for joining us today. If you enjoyed the session, then you might really enjoy continuing to join us every week. We're back on Wednesday all the time. You can join us same time, same place. We'd love to see you both on LinkedIn as well as on YouTube. And if you want to go a little bit deeper, maybe check out joining our Discord channel. We have almost a thousand people in our Discord and it's pretty popping most of the time. We've got a lot of great projects people are working on. If you're looking for one, it might be a cool place to make some connections. And then if you want to start really taking your game to the next level and you want to start for free, we recommend checking out our open source LLM Ops course. It's free on GitHub and on YouTube. You can get all the boilerplate code and concepts you need straight through just a few videos. You can do it in really a day or two and get that base of LLM Ops. If you're ready to truly accelerate, take it to the next level, and figure out all these details (LangChain, LlamaIndex, what's fine-tuning, what's the deal, how do I make these decisions), maybe check out and consider applying to our AI Engineering Bootcamp. We just kicked off cohort two, and cohort three is coming up in May. We'd love to have you. If you're interested in getting more one-on-one time with us and with peers that have graduated in the past, getting access to hiring partners, and learning the core concepts and code of AI engineering today, then it might be a good option for you. And that's a wrap for today, everybody. Any feedback that you have, we'd love to hear it. I know we had a couple of snags today and we'll continue to improve on our end as we keep building, shipping, and sharing. We hope that you will do the same: build, ship, and share something awesome this week and tell not just everybody in your network, but maybe even everybody in the AI Makerspace community. So until next time, keep getting after it, keep building, shipping, and sharing, and we'll do the same. See you soon, everybody. Bye, guys.
Advanced Retrieval Methods for RAG
3,671
AI Makerspace
20240411
In this event, we will break down the retrieval algorithms that AI Engineering practitioners should know and have at hand within their toolbox. Algorithms known to provide greater precision and results at the retrieval step of RAG include the Parent Document Retriever, Self Query Retriever, Contextual Compression, and Time-Weighted Vector Store. RSVP: https://lu.ma/retrieval4rag Have a question for a speaker? Drop them here: https://app.sli.do/event/3eFnpQg7xrgcnb3TgMGQL6 Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/jKdAm5kLb4fRMa8UA
2024-06-10T02:00:11.043606
https://www.youtube.com/watch?v=dt1Iobn_Hw0&t=1s&ab_channel=AIMakerspace
Hey Wiz, we've talked quite a lot about the black art of chunking in our courses over the past six months or so, haven't we? Yeah, we sure have. Yeah. Yeah. People are always asking questions and they're always saying, hey, so how should I chunk my documents? What's the answer? Is there an answer? There's not really a one size fits all answer to the question, even 2024, Greg. Well, there's this new method I heard about and everybody's been talking about recently that apparently asks the question, but what's the meaning of each chunk? It's called semantic chunking. Have you heard about this? I have. Yeah. Yeah. It looks pretty promising, doesn't it? It looks like it looks pretty good. Yeah. Yeah. Yeah. Yeah. Okay. Well, today we're going to dive in and we're going to see exactly how it stacks up qualitatively and quantitatively versus the gold standard. Sound good? Sounds like a great plan. All right, let's do it, man. Let's jump in today. Welcome everybody. My name's Dr. Greg. That's The Wiz, AKA the LLM Wizard. We are co-founders of AI Makerspace. Thanks for taking the time to join us today. Today we're talking semantic chunking for retrieval augmented generation. It's one of the newest methods. A lot of people are talking about it. A lot of the new frameworks are implementing the method. And we're asking the question today, is it something you should put in your toolbox as you start building, shipping and sharing production LLM applications? Today, we're going to get to the bottom of it, and you'll learn if it is something you should definitely keep in your pocket to pull out as you try to take your RAG systems to the next level. systems to the next level. We'll have Wiz back a couple of times today to dig into some details about specific chunking we're doing on the document we chose today. And of course, we'll be back at the end for a demo. If you have questions along the way, drop them in the Slido link for us. That way they can be upvoted and we can make sure that we get to your question. All right, let's get into it, everybody. Today we're talking semantic chunking. And overall today, what we want to do as we align ourselves towards the session is we want to understand, first of all, the process of semantic chunking. We want to contextualize why we would care about semantic chunking for RAG. And then we want to take a really close look at how the kind of best practice chunking method is different, but also kind of the same in spirit as the semantic chunking method. It's all about meaning after all that we're trying to extract when we deal with words and when we deal with generating answers to our questions based on what's actually contained in our private documents. And then we're going to leverage a tool that you've maybe seen us use before called RAG assessment to actually look at how performance differs quantitatively between the two methods for building a simple RAG system. So first off, we're gonna sort of contextualize this thing with a little bit about chunking. We're going to talk about two methods for text splitting, aka chunking. One is going to be the recursive character text splitter, the one that you probably are grabbing off the shelf if you're building LLM applications today. And then we'll talk about semantic chunking. We're going to assess the method and do a complete walkthrough with the RAG assessment. And finally, we'll answer any questions you guys have as we get to the Q&A. So when we talk about chunking, it's really a simple concept. 
We're going to break our text into smaller pieces. We can't just feed all of the text in at once, despite the proliferation of people asking us about long context windows today. RAG is always going to be something where we're chunking. We're always going to be splitting things up into the right amount, and what the right amount is for any given context is a very hard question in general. But it's important to ask ourselves this for our application, because all of this fits into the process of retrieval. We need to go find the relevant information that we have in our documents based on any given question or query we might put to an application we build with an LLM. And retrieval, of course, is the first part of RAG. It puts the R in RAG. It's where we find our reference material, where we get the facts that we're going to leverage to improve our generations after we augment our prompt. And of course, putting the R in RAG is just this idea of retrieving, often with dense vectors. If we take RAG as two component pieces, we can see that it's really the dense vector retrieval piece doing a lot of the heavy lifting, and then we're leveraging the concept of in-context learning by putting this information into the prompt, by augmenting the prompt. So it's important to really focus on this retrieval step. When we ask a question, that question is chunked and turned into a series of numbers, and we're looking for that representation of our question in embedding space. We're looking for stuff that's similar to it, stuff that in embedding space is close to our question. And that's where we go to our vector store, which is full of embeddings that represent our data in that same embedding space, and we're looking at the distance between our question and the stuff we want to retrieve. If we chunk differently, whether we're chunking our question or all of the data in our vector store, it's going to change the results that we get. And so this is why we sort of refer to this as the black art of chunking. It's the thing people often don't talk about, because there is no single right answer, although there are best practice methods. This allows us, after we set up our prompt template, to return those chunks in natural language, after they've been compared in embedding space, as reference material. So this dense vector retrieval is the point. Really, it's the thing that drives the improved generations when you finally feed the prompt into the chat model and get your answer. The second piece is the in-context learning piece, which is really not the point of today's lesson. But overall, retrieval augmented generation starts with retrieval, and retrieval starts with chunking. In other words, what gets chunked gets retrieved. This is the reality of the situation. And so when we look at retrieval, we need to take a really close look at this vector store. This is where we take our data and put it, in vector format, into a place where we can search for stuff. And the thing about the vector store is that the only thing in there is chunks. That's it. So chunking really is fundamental here. And so we want to make sure that when we're chunking, we're doing it in a way that we believe is useful.
We believe that's going to produce the best possible results. And we believe aligns with what we're aiming to ultimately use our LLM application for. Improving retrieval can be done through different retriever methods, but it can also be done by improving your chunking. Now, Lewis Carroll famously said in Alice in Wonderland, the best way to explain is to do it. We're going to leverage Lewis Carroll's writing today, and we're going to chunk up Alice in Wonderland and put it into a vector store. We're going to do this a couple different ways. Two different ways. But to make sure that we're clear on why those two different ways matter, let's talk about the chunking methods that everybody's mentioning right now. Number one is the fixed-size method. Give me your chunk size in number of characters, give me the overlap between chunks. That's it. Okay. Very basic, very simple. The recursive method is the one we're going to dive into in a little more detail in just a minute. But if this meme is any indication, what we're doing is we're combining some ideas of fixed-size chunking with more natural break points. Now it's worth noting that you can use a chunk overlap and chunk size fixed-size approach. You could also do something simple like say, why don't you just chunk by sentence or chunk by paragraph? There's sort of a natural indication that that's going to be useful in some ways to us as people who are reading and writing in sentence and paragraph form. There's also document-specific type of chunking. This we'll leave aside for the day, but the great use case here is to think about programming code. Oftentimes, there are natural breaks within programming code that indicate, well, this is kind of a chunk of code right here. And so you can do this with different document types. And we won't get into this. As we get into more and more structured types of language data, there's going to be a lot of different ways to potentially do chunking in the future for more complex documents. Today we're focused on sort of unstructured text, more like a novel that we'll use. And then of course there's semantic chunking, and we're gonna dive into this in a minute. This is the purpose of today's event. It's also worth noting that agentic chunking is a thing that's starting to be discussed. Now, the idea here is that after initial chunks are created, then you use an LLM to reason about whether or not you should chunk differently. This is sort of the meta pattern here. The idea of agentic chunking, we want to make sure we're clear, is a pattern. And if you missed our event last week on agents for RAG systems, give that one a watch to get a little more insight into the many, many different ways you might use an agentic chunking approach. This would basically be putting some reasoning into the retrieval step. And there is a specific sort of propositional approach people are talking about today with agentic chunking, but we expect that this will be more and more sophisticated. The methods will grow as the popularity of agents and multi-agent systems grows. But for today, we're going to focus on recursive and semantic chunking. In order to understand semantic and recursive chunking, we have to start with this most basic fixed-size chunk. Number of characters, overlap, by sentence, by paragraph, do these things make sense? Well, how about if we kind of combine these ideas into more of an ensemble approach? If you remember classic machine learning, this always worked well. And indeed, it works pretty well when we talk about chunking.
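As a point of reference for the fixed-size method just described, here is a minimal plain-Python sketch of fixed-size chunking with overlap; the chunk size and overlap values are purely illustrative, and the sample text is just a snippet from the book.

```python
# A minimal sketch of naive fixed-size chunking with overlap (plain Python).
def fixed_size_chunks(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        # May cut words or sentences mid-way; that is the core weakness discussed here.
        chunks.append(text[start:start + chunk_size])
    return chunks

sample = "Alice was beginning to get very tired of sitting by her sister on the bank..."
print(fixed_size_chunks(sample, chunk_size=40, overlap=10))
```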
Introducing the Swiss army knife of chunking. Shout out to Greg Kamradt for coming up with that analogy. This is really the de facto standard. And what recursive text splitting does is it kind of leverages what's good about the fixed size and overlap chunking while also leveraging the natural flow of language in sentence or paragraph form. So this is sort of the recommended generic text splitter. It essentially has a fixed chunk size and a fixed overlap in mind. But then it goes through and it looks at different separators. For instance, the double new line separator is indicative of a new paragraph. A new line separator is often indicative of potentially a break in the meaning of what's going on. This could be a new sentence, but there's a lot of different ways that authors can use new lines in their writing, if you think about the many different types of authors you've read and how they can get pretty creative. And then as we sort of move through each of these types of separators, we sort of return and we ask ourselves how close we are to our desired chunk size. And we go down through to the space level and to the character level. And I want to look at an example right here from Alice in Wonderland. This is from the very beginning of the book, page two to three. If we use a recursive character text splitter out of the box, this is one chunk that we might get. And I want to bring the Wiz back up on stage now to discuss this a little bit with us. Because so Wiz, we've got this double new line, new line, spacing and character sort of level of separation we're looking at. But yet there's this recursion, there's this return to this fixed size and overlap. Can you talk a little bit about how you understand this chunk being quote unquote decided upon by this algorithm here? Why would it choose this particular chunk? Yeah, you bet. So the basic idea here is that we lack the double new line in this chunk, right? So you can see that there's a double new line before there, which is above this specific chunk. And, you know, we don't have one in the highlighted text. So we can't split on that, right? We're kind of already out. We're then going to see, okay, well, can we split on a new line character? Yeah, well, we have a couple. But we don't have any that are well sized, let's say, right? We have a very small one, we have room left over. So you'll see that we've split here on our space character because of that. Right. So we split at the end on a double new line since we don't want to go into the rest of that paragraph based on the chunk size that we have so far. Right. And then we don't have a new line that gets us close enough. So we're going to split on a space character, as you can see, because these are generated in order. So from start to finish, we left off the previous chunk at 'Oh', so we had to start at 'dear', basically. And then the next closest to our desired size, which is 200, was the double new line that's at the end of this chunk. So after 'again', period, there's going to be two new line characters. That's basically it. So like this, when you start the document, obviously, you're just going to start at the start. And then you go until you find a double new line or your chunk size.
And if you find your chunk size, but you don't find a double new line, you're gonna look for a new line that's close to your chunk size. And if you can't find a new line that's close to your chunk size, you're gonna look for a space that's close to your chunk size. And we just keep repeating that, recursively even, until we get our desired chunk. And so, in this specific instance, right? So we were talking about how we're not going to talk about document-specific chunking, but because our text is a book and because it has paragraphs and chapters and sentences, it lines up such that the default recursive text splitter is actually a great document text splitter in this case, because it is following the format that we'd expect. Paragraphs contain single ideas, sentences smaller ideas, and we want to not break in the middle of a sentence where possible. That's the idea. I see, I see. So really it's not that different than the fixed size. It's sort of shoring up the edges of the fixed size in a way. It's kind of not doing something fundamentally different. It's still sort of aiming at this fixed size in the end. Yeah, absolutely right. The only small advantage is that we're more likely to break or split on a place where you would expect there to be different information, on the double new line, right? If you think about a Wikipedia article, right? The double new line is going to be a new section. A single new line is gonna be interparagraph. And then a space is intersentence and a character is interword. And we wanna avoid interword where possible because those words are definitely a piece of information. And so we'd really rather not split on the word. We'd really rather split on these bigger pieces of information where we're likely to see a difference in meaning between the two. So that's exactly right. Okay. Yeah. Definitely not trying to split words up. That makes a lot of sense. Okay. So, but yet this still leaves much to be desired potentially. I mean, intuitively I'm reading this and I'm looking at, 'there was nothing so very remarkable in that,' and I want to include this in the chunk. I mean, something about the nature of the paragraph, like this feels like it's appropriate to the meaning of the chunk, doesn't it? It sure does. I mean, in fact, we split the direct quote. And in this case, right, it's maybe not so important because we're just lopping off 'Oh dear.' But imagine we lopped off a negation at the beginning of a quote, right? Like this is something that's potentially problematic. And with the fact that this naive chunking strategy really has no real dynamic size, right? It's just kind of like, it can get as big as it can get. And it's going to greedily try to get as big as it can get. I think we wind up in a situation where we're definitely losing something from this quote. And again, in this specific example, it's not super impactful, but it's very easy to imagine a situation where that first word was a very important piece of the quote, right? So I think it's, yeah, you're exactly right to say, we would love to have that previous stuff, because we know it's related to this later stuff, right? And so we, you know, we can think to do this in different naive ways: maybe we only ever chunk on new lines, and we never do intersentence. So we let our chunk sizes be quite big, or we don't define a chunk size, right?
We just know our documentation. But there are many instances where we can't perfectly know our documentation, or that's going to take too long or too much processing, et cetera, et cetera. So. That's right. That's right. Yeah. I mean, let's say I asked, you know, hey, who is late? And I returned this chunk. You could probably, maybe a middle schooler could kind of maybe eke this out, but it'd be tough without seeing 'to hear the Rabbit say to itself,' you know. It's like, boom, there we are. So, okay, very cool. Then I really like that we're sort of seeing there's the super naive fixed size, and then there's the slightly less naive recursive character text splitter. So that's the big takeaway from here. All right. Thanks, Wiz. We'll have you back in just a little bit. So as we take it to the next level here, let's talk about semantically splitting text. Is there a better way? Can we do it in a way that's more meaningful, let's say? Well, consider a classic retrieval algorithm improvement choice that people building with LangChain will often make today. This is something that we've used extensively in our courses and that we encourage folks to check out as an entry-level advanced retrieval method. It's called the parent document retriever. And there's two big ideas in the parent document retriever. One is that small documents are good, and the other one is that big documents are also good. Let's focus in on why small documents are good right now. They're good because the embeddings accurately reflect meaning and relevancy. Small is good because the embeddings accurately reflect meaning. This is key because if we can accurately reflect meaning in our embeddings without having to make them small, then we solve the same problem that small docs within the parent document retriever are solving. This is exactly what semantic RAG is trying to do. In other words, big documents are good because they retain context within each chunk, and small documents are good because embeddings accurately reflect meaning; semantic RAG is going to accurately reflect meaning and retain context within each chunk at the same time. Semantic RAG, one way to think of it, is as the best case of a parent document retriever. Now we've got an event in a couple of weeks on advanced retrieval. If you want to see us break down a ton more of the state-of-the-art methods, definitely check it out and join us for that. As we look at chunking semantically, and shout out again to Greg Kamradt, he's kind of the guy who came up with this idea, and he came up with this idea based on a tweet from Linus here. Greg actually went and did it. Very cool. And he said the hypothesis is we can use embeddings of individual sentences to make more meaningful chunks. So we're going to chunk by sentence. We're going to look at embeddings of each sentence chunk. When we create chunks, what we'll do is we will, number one, split the document into sentences. What we're going to do is we're going to group sentences as well. So we're going to take sentences one, two, and three. We're going to compare it to sentences four, five, and six. This is the standard in the method. You could imagine doing this with groups of any number of sentences, of course.
We ask ourselves, is the block of sentences one through three close in embedding space to the block of sentences four, five, and six? How close? How similar is that similarity metric? If they are similar, right, we want to combine those into one chunk. Because if they're similar in meaning, we want to use why small docs are good and combine them. They're good because they capture meaning well. So if they're similar, we want to combine them. If they're too different, right? If they seem to mean things that are too different from one another, then we'll split. This is the big idea. Now, if you've got a good intuition on this, you might think, well, we might get some pretty freaking large chunks if we do this. And in fact, you'd be right, because out of the box, when we implement semantic RAG from LangChain, we get a massive chunk. Again, we're not chunking here based on fixed size and overlap. This is the beginning of the book, and this is on page six, well, at least of the edition we grabbed online. It goes all the way from the beginning to page six, using this embedding similarity chunking approach. So I want to invite Wiz back up here again to talk a little bit about how he's thinking about how this thing captured six pages in one chunk. Is that right? Is that what we got here, Wiz? How did that happen? I mean, yeah, it kind of makes sense, right? Like, the chapters are ultimately a large semantic chunk of information. That's, you know, likely what we're gonna think of as a single semantically related portion of a book, right? If you asked someone, you know, hey, how would you best group chunks of meaning in a book, they're probably gonna say, oh, chapters is a good start, right? Usually it's a coherent idea. Now, there could be some difference. And in this case, there is some difference, right? So we see like a distinct shift between the two sections of this chapter: one, which is before Alice starts to fall down the rabbit hole, and the second, after which she starts to fall down the rabbit hole. And, you know, I think this is why we have this break, you know, to try to guess at the reason. But the idea is that the first part of the chapter is basically just about Alice wandering around bumping into the rabbit, right? And all of that information is relevant to this chapter, and so that's why we get this split here. And that's why we're capturing so much information, right? The idea, especially when we talk about long context windows and everything above, blah, blah, blah. When we talk about these things, we are left in a place where we're thinking about, well, if this is all related to the first part of this chapter, if all this is interconnected, right? If this whole first bit is about Alice meeting the rabbit and then falling down this rabbit hole, then we should group it together, right? We should provide that as context. We're most likely to include relevant pieces of information if we do that. As well, you can think of this as an analog to the parent document retriever, as you suggested, except instead of just relying on locality, right? So we know that pieces of information that are near each other are probably related to each other.
Instead of just relying on that concept, we're also going to leverage, well, we should think about whether they relate in meaning to each other, right? It's like in a textbook: if you're talking about one concept, and then you pivot to the next concept, yes, the locality would lead you to believe that these things are related, but they're different in meaning, right? The first bit might be about a certain, you know, a certain situation, and the second bit might be about another situation. And so, you know, this lets us build these larger chunks without compromising the meaning of that chunk, or of those individual chunks. The other way I like to think about it is, whenever we embed something into this space, you can think of it as literally a point in 3D space or n-dimensional space, right? And what we're looking at is every time we add a new set of documents, the point's kind of in the same place, right? And that's true until we shift it past a certain threshold, right? Which is what we're going to talk about a little bit in the code. And if we move it past that threshold, we're saying, okay, there's a difference now. But as long as we keep kind of pointing to the same place in that space, I mean, all that information's, you know, wonderfully relevant. And it will only help us to understand the context. And so that's how I think about it. Yeah, I really like this textbook idea or sort of the technical report idea. It seems like in those cases, there'll be much more natural shifts in context that you might not find in a novel. I mean, fundamentally, right, a novel is kind of all the way connected on some level. And so, you know, to sort of do subchapter even is tough. I mean, in many novels, there's subchapter headings, and that's the natural break point, right? And so, you know, we could, you know, sit here and get very, very particular about, well, if it was up the rabbit hole and down the rabbit hole, as you say, well, why did we get 'down, down, down' in the first, you know... And there's all sorts of really nuanced things when we try to be data-centric here, but this is a very useful way to think about it, especially in specific types of documents, I think. Love the textbook idea, love the sort of technical reporting idea. Places where you sort of switch context more than a novel, this might be incredibly useful. And, you know, that to me feels like a potentially big, big win for this method, although I don't know. And we'll see what the numbers tell us here shortly, won't we? We sure will. Yeah, yeah. All right. Well, let's go ahead and introduce this build. We'll have you back in just a little bit, Wiz. What we're going to do is we're going to now see if we can compare these quantitatively. Everybody likes numbers. It's hard to compare these things qualitatively for any given type of document in general, as I hope we've sort of demonstrated with this simple novel. What we're going to do is we're going to load the document. We're going to chunk using two different methods, the ones that we've talked about, the recursive character text splitting and the semantic chunking. We're going to, in the notebook, see if we can look at an example or two from Alice in Wonderland. Is it clear which one's better?
Well, we kind of talked about the primary example of the session. I think it probably makes a little more sense that we kind of split the chapter in half, maybe up the rabbit hole, down the rabbit hole kind of thing, versus being more rigidly attached to a 200-character chunk, approximately, but one that's looking for new lines or spaces between words. And then by building out a simple LangChain RAG system, we're going to be able to quantitatively wrap that in the RAG assessment framework and decide what the metrics we're getting are telling us. Of course, we're using Lewis Carroll's Alice in Wonderland to do our chunking. We're using OpenAI's models here. And we are using the RAG assessment framework, which we've talked about in a recent event on the Art of RAG evaluation. If you want to go into details on the latest from the guys over at Ragas, shout out to them and their recent Y Combinator appearance, then check out this event to dig into all the details of the Ragas framework. But for now, you'll understand kind of what's going on at a high level as we see the numbers go up or down as we assess semantic chunking for RAG quantitatively in the notebook. Wiz, sending it over to you. Oh yeah, okay. So, pretty straightforwardly, you know, the way we begin most of these builds: we're going to get some dependencies. We're going to grab LangChain Experimental, which is new. That's because this is an experimental feature still being worked on. Not everything's ironed out. Some of the strategies are still a little bit naive and this needs to be, you know, built upon or, you know, expanded on in order to reach like the full, quote unquote, the full power, right? So we're going to keep that in mind as we go through the notebook. We're also going to grab OpenAI since we'll be leveraging those models through both our RAG LCEL chain as well as our RAG assessment. We're going to just grab LangChain Core and Ragas. We're also going to grab FAISS CPU and Tiktoken. FAISS CPU is just what we're going to use to build our local embeddings store, so our vector store. We're just going to use FAISS to power that. Next, we're going to grab Alice in Wonderland. We just grab it from Gutenberg. This is a copyright-free book, so it works for the demo today. We're going to read it into memory, and that's it. We're just going to leave it there. You can use a document loader from LangChain, but there's not a significant difference here. So we're going to first just chunk these naively to see what the chunks look like, and we're going to do that with our recursive character text splitter. We're going to have a chunk size of 200. No overlap. We are going to use the Python len function, which means we are not using token length here. We are just using characters. And then we are going to keep the is_separator_regex equal to false. This is just a parameter that you should set. We are then going to split our documents and look at some of the chunks. We can see, right, this is exactly what Greg outlined to us. We have a number of small chunks, and these chunks are broken apart kind of just, you know, in the middle of sentences. They're broken apart, you know, maybe inconveniently. Again, this chunk size was picked purely to illustrate this point. This is still true at a larger chunk size. It's just going to be less frequent depending on how many documents you have. But the idea is pretty straightforward, right?
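For reference, here is a minimal sketch of the splitter configuration just described; the parameter names follow LangChain's RecursiveCharacterTextSplitter, and the variable holding the book text is an assumption standing in for whatever was read into memory from Gutenberg.

```python
# A sketch of the naive splitter setup described above: chunk size 200, no
# overlap, character-based length. The default separators are tried in order:
# "\n\n", "\n", " ", "".
from langchain.text_splitter import RecursiveCharacterTextSplitter

naive_splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,
    chunk_overlap=0,
    length_function=len,      # characters, not tokens
    is_separator_regex=False,
)

# `alice_text` is assumed to hold the Gutenberg text read into memory earlier.
naive_chunks = naive_splitter.split_text(alice_text)
print(naive_chunks[:3])
```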
These chunks don't contain a lot of information in each chunk, and it's disjointed from potentially relevant information nearby. So how do we implement semantic chunking? Well, first we're gonna need your OpenAI API key. You can use a local embeddings model for this, like any of the Hugging Face models, you could use Cohere's embeddings, whatever embedding model you want; we just do need something to embed our chunks. So we're gonna look at the semantic chunker a little bit more technically here. We kind of got the intuition of how it works and why we care about how it works, but we're gonna talk a little bit more about the actual implementation. So the implementation is very straightforward. First of all, we're gonna have a number of different thresholds. These descriptions are straight from the LangChain docs, you can see here. I just, you know, nothing explains it better than the actual documentation, so there you go. But the idea is, you know, when we think about when to actually split, right? When to make a gap in chunks, we want to do it because there's some kind of difference. So the first thing that we can think about is a percentile difference, right? So once we get all of our differences in between our sentences, we can see, you know, if there's an X or greater percentile split, that's when we make a break in the chunk, right? We can also use the standard deviation. So if the chunks are greater than a certain standard deviation away from each other, then we should not incorporate those together. And lastly, we can use interquartile distance between the chunks to determine when they should be split. These actually have a significant impact on how your document is going to be chunked. So I would experiment with all of them. In this case, we're just going to leave it as the default, which is percentile. So if we have, you know, a greater than a certain percentile difference between two chunks, we will not combine them together. And the way that this actually works in the code behind the scenes is we first split our entire set of documents into sentences. We're going to do this very naively, just based on period, question mark, exclamation mark, right? So we're not doing like spaCy sentence tokenization or anything like this. We certainly could. And maybe that would lead to different results. Maybe that would be better. It would certainly be more compute expensive, though. So we're just going to naively, you know, break on sentences as we understand them based on these punctuation characters. We're going to index each sentence based on its position in the document. So the first sentence is the zeroth sentence. And the last sentence is the number-of-sentences sentence, right? So pretty straightforward stuff. We're just going to index them based on their position. Then we're going to do something that I think is integral to this strategy. We're going to combine sentences without thinking. No embeddings required. We're going to combine groupings of sentences. By default, we're going to use this buffer size equal to one, which means that the sentence that's distance one on either side of our target sentence is going to be included in a group by default. So as Greg expressed, right, sentences one, two, three are considered a single unit. Sentences four, five, six are considered a single unit in this specific example. Zero, one, two, and three, four, five, if we want to get very comp-sci about it. But this buffer size is something you can play with.
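To make the steps just described a bit more concrete, here is an illustrative sketch of the idea, not LangChain's actual source code: naive sentence splitting on punctuation, buffer-of-one grouping, cosine distances between consecutive groups, and a percentile breakpoint. The embed callable is a stand-in assumption for whatever embedding model you choose.

```python
# An illustrative sketch of the semantic-chunking breakpoint logic described
# above. `embed` is an assumed callable that maps a string to a vector.
import re
import numpy as np

def semantic_breakpoints(text, embed, buffer_size=1, percentile=95.0):
    # 1. Naive sentence split on ".", "?", "!"
    sentences = re.split(r"(?<=[.?!])\s+", text)
    # 2. Group each sentence with its neighbors (buffer_size=1 -> groups of ~3)
    groups = [
        " ".join(sentences[max(i - buffer_size, 0): i + buffer_size + 1])
        for i in range(len(sentences))
    ]
    # 3. Embed each group and compute cosine distance between consecutive groups
    vectors = [np.asarray(embed(g), dtype=float) for g in groups]
    distances = [
        1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for a, b in zip(vectors[:-1], vectors[1:])
    ]
    # 4. A new chunk starts wherever the distance exceeds the chosen percentile
    threshold = np.percentile(distances, percentile)
    return [i + 1 for i, d in enumerate(distances) if d > threshold]
```

Swapping the percentile rule for a standard-deviation or interquartile rule is the only change needed to mirror the other breakpoint types mentioned above.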
The basic idea here is that we expect that the sentence by default will be related to the sentence before and after it. This is not strictly true, doesn't have to be true, but it's a good thing to think about. And it's a good place to start because we do want to make sure that we're capturing some, you know, combined representation of these sentences, right? We don't want just each sentence, because each sentence might be very different from each other, but still related to the whole subject matter. So we're going to seed the embedding with this kind of three-sentence idea. Then we're going to calculate distances between groups of sentences. We just look at their embeddings, and then we calculate the cosine distance between them, and that's it. Now we have a bunch of groups of sentences and we have the cosine distance between each of those groups. Then we're going to merge the groups based on the similarity, based on the chosen threshold, right? So this is where we're making the decision. If sentence group A is X percentile different than sentence group B, we will not merge those together, and so on and so forth. And so that's the idea. Again, this is experimental. It's still being worked on. We still have a lot of space to explore here. There's not a ton of research that's concrete on this. So, you know, expect that this will change and become more and more performant over time. The actual implementation is quite straightforward, right? We call the semantic chunker, we provide the embedding model, and we choose the breakpoint. If you don't choose the breakpoint, it will default to percentile, but you can choose it. So we show that here. You'll notice that we're gonna use text-embedding-3-large. This is based on intuition only, but my assumption is the better our embedding model is, the more accurately we're going to be able to chunk things based on that embedding. So I went with a large embedding model here, though you could choose a smaller one. So you could choose text-embedding-3-small. It's totally up to you. I just went with the intuition that a better embedding model will lead to better chunks. And I have no concrete, you know, there's no research that says it's true; it seems kind of likely, so that's why we did that. And then you can see that we get this chunk exactly as we saw from Greg. You know, it's quite long. It's in fact 4,000 characters, right? So, you know, 4,000 characters is quite a lot. Each of our other chunks only has 200 characters. And so if we do the math, right, you know, we're going to see that we have the equivalent of like 20-ish chunks. And that's, you know, that's something that we'll keep in mind going forward. So now we can create a RAG pipeline. We're going to use text-embedding-3-large for our vector store's embedding model. There's no research that indicates, you know, that we should use that. But again, the intuition would suggest we should use the same embedding model for comparing our query vector to what we have in our vector store that we did to create the chunks, right? That will communicate the best semantic consistency between the two different processes. So that's why we went with that. But again, you could use anything. I mean, it will work if you use anything, but we chose this with the intuition that, you know, because we created the chunks and the chunks are related to each other semantically according to this embedding model, then we should use the same process to compare them.
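Here is a hedged sketch of the configuration being described; SemanticChunker lives in langchain_experimental, and the text-embedding-3-large choice simply mirrors the intuition above rather than any established best practice.

```python
# A sketch of the SemanticChunker setup discussed above. Any embedding model
# works; text-embedding-3-large is the (intuition-based) choice mentioned.
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")

semantic_chunker = SemanticChunker(
    embeddings,
    breakpoint_threshold_type="percentile",  # or "standard_deviation" / "interquartile"
)

# `alice_text` is assumed to be the same raw book text used earlier.
semantic_chunks = semantic_chunker.create_documents([alice_text])
print(len(semantic_chunks), len(semantic_chunks[0].page_content))
```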
And then we are going to just use one chunk here, right? Because they're big chunks. And so it makes sense. We'll just use one semantic chunk. And then we can ask questions, like who has a pocket watch? And we get some kind of response. Then we're going to create our augmentation prompt. Then we create our model and then we create our chain. This is all LCEL stuff. We've got a ton of previous material on this if you're interested in exactly how this is working, but for now we just create the chain and then we can ask questions. How does Alice find herself falling down the rabbit hole into Wonderland? And we get this awesome response. It's quite fully fleshed out, pretty cool. And then of course we ask about Dinah, and we get: Dinah is Alice's cat, and she is important to Alice because she is a great mouse catcher. That's pretty cool. Then we can go to the other strategy, which is naive. We're going to use the same embeddings model just to make it fair. And then we're going to use 15 chunks of retrieved context. The idea here is we want to level the playing field a little bit, right? So where we have this one big chunk, we're going to use 15 smaller chunks in order to give this a chance, right? And the idea is that this should make it a more fair comparison, as the token count should be relatively consistent between the two methods. Then we're gonna ask the same question and we get a slightly less robust answer. And we get an answer about Dinah, but it focuses more on just that she's great. And it doesn't focus on why she's great, and why she's great is because she's a good mouser. So, you know, we lose that information. Okay. So, like, I mean, just looking at it, you can kind of say, okay, you know, maybe this is actually preferable. These responses seem a little bit more relevant. Okay. Let's see if we can actually put that into numbers. So we're going to go ahead and we're going to create our Ragas assessment comparison dataset. We're going to chunk the source document with a different chunk size, just so our naive RAG can't just cheat, right? It could cheat if we use the same size. So let's not let it cheat. So we're gonna create it in chunks of 400. You can choose any number here, just as long as it's meaningfully different from your other strategies, right, your other pipelines, because we don't want it to be able to cheat. Then we're gonna create questions, which are synthetically generated by GPT-3.5 Turbo. This is for each chunk in our data, right? We're gonna use those contexts that we created above. We're gonna create ground truths, which will be generated by GPT-4 Turbo. And then we're gonna have our RAG chain answer the question based on the question and context, though it will retrieve its own context, as that's the point. So we have this here: a question prompt, you're a teacher preparing for a test, please create a question using the following context; and a prompt to use the following context and question to answer the question, using only the provided context. There you go. And we'll create these two chains and we'll run this through. You can also use Ragas' built-in synthetic data generator. You know, it is a wonderful tool. We're just showing kind of what's happening under the hood here. And then we will create our dataset. And remember, the question is generated by GPT-3.5 Turbo. The answer comes from our semantic RAG pipeline. The context comes from the ground truth context, and the ground truth answer comes from GPT-4.
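Before getting to the numbers, here is a hedged sketch of the kind of LCEL chain being described: a FAISS store over the semantic chunks, a retriever that returns a single big chunk, and a simple context-grounded prompt. The prompt wording, model name, and variable names are illustrative rather than the exact notebook code; the naive pipeline would look the same but use the 200-character chunks and k=15.

```python
# A hedged sketch of the semantic RAG chain described above (not the exact
# notebook code). `semantic_chunks` and `embeddings` come from the
# SemanticChunker sketch earlier.
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

vector_store = FAISS.from_documents(semantic_chunks, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 1})  # one big semantic chunk

prompt = ChatPromptTemplate.from_template(
    "Use only the provided context to answer the question. "
    "If you cannot answer from the context, say you don't know.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)

semantic_rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-3.5-turbo")  # model choice is an assumption
    | StrOutputParser()
)

print(semantic_rag_chain.invoke("How does Alice find herself falling down the rabbit hole?"))
```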
Then we're going to evaluate based on answer relevancy, faithfulness, context recall, and context precision, just the defaults from Ragas's documentation. And we'll get some results. And I mean, we see the result: context precision is high; faithfulness is a bit hit or miss, we get some zeros and some NaNs, so not-a-numbers, here. Our answer relevancy is pretty decent at 0.73. And our context recall is quite high, which makes exact sense. So that's great. Now let's look at the naive strategy, right? So same thing, but the naive strategy, and we can see that they're a little bit different. And I'll let you guys go through the numbers in detail in the notebook if you wish. But we can see that the actual naive result is pretty much the same, right, for everything except for answer relevancy, and our answer relevancy is notably worse with the naive strategy, which kind of makes sense, right? Like, it has less of that relevant information, and so it's able to answer the questions less fully. And there you go, that's the basic idea. And we see the semantic chunking strategy seems to have better performance on answer relevancy without giving anything else up. And so it seems like a good idea to pursue it, with perhaps some modification and perhaps more testing on which thresholds work best. But the idea is, if you were to ask me, is semantic chunking better than naive chunking? I would say to you, seems so, yes. That's what the assessment seems to indicate. So with that, I will push you guys back to Greg, who will take us to our Q&A. All right. Thanks, Wiz. So it does what it says on the tin. All right. Well, excellent. That was semantic chunking for RAG. And in case you didn't quite catch those metrics, just briefly, context precision is asking how relevant is the context to the question. Context recall is asking, is the retriever able to retrieve all of the relevant context? And we got similar performance on both of these. Faithfulness is asking, is the answer fact-checkable or is it a hallucination? Again, similar performance here. But the difference was on answer relevancy, which asks how relevant is the answer to the question? Interesting. The semantic chunking process of splitting, then indexing on position before grouping, calculating distances between those groups, and then deciding whether or not to merge based on similarity thresholds, this is something that we'll continue to see evolve. And it does appear that it produces good results in some cases. As we think about this particular key result of the day, the answer relevancy improvement on semantic chunking versus recursive text splitting, this is a generation metric. And again, it asks how relevant is the answer to the question? Importantly, this does not consider factuality, but instead penalizes cases where the answer lacks completeness or contains redundant details. Specifically, to calculate this score, an LLM is prompted to generate an appropriate question for the generated answer many times, and then a mean cosine similarity between the generated questions and the original question is measured. So that overall, if we can generate an answer accurately addressing the initial question, the LLM should be able to generate questions from the answer. Little Uno reverse card metric here. And we see the improvement. So that's pretty cool. More relevant answers with semantic chunking.
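For completeness, here is a hedged sketch of how that Ragas evaluation might be invoked. The metric imports are standard Ragas ones; the column names follow the schema Ragas expected around the time of this event, and the lists are placeholders standing in for the synthetically generated data described above.

```python
# A sketch of the Ragas evaluation described above. `questions`, `answers`,
# `contexts`, and `ground_truths` are assumed to come from the synthetic
# generation step (GPT-3.5 Turbo questions, GPT-4 Turbo ground truths).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)

eval_dataset = Dataset.from_dict({
    "question": questions,
    "answer": answers,            # produced by the RAG chain under test
    "contexts": contexts,         # list of lists of context strings
    "ground_truth": ground_truths,
})

results = evaluate(
    eval_dataset,
    metrics=[answer_relevancy, faithfulness, context_recall, context_precision],
)
print(results)
```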
And that's kind of the takeaway for the day. Remember, what gets chunked gets retrieved. So if you're doing RAG, you should think about chunking. You can be naive, and that's fine, but maybe you can take it to the next level. And maybe you should keep this in your toolbox and watch the emerging techniques, especially the thresholds at which we do semantic chunking. It was quantitatively better, and perhaps it was even qualitatively better. Perhaps it could be even a little bit more qualitatively better on different types of documents that are suited for this. And we've got a couple questions I just want to ask. Welcome back up to the stage, Wiz. You kind of mentioned this before, but I do believe that it's really kind of worth tripling down on: does it matter which embedding model we use when we're thinking about chunking? Or should we always just, kind of, you mentioned a bigger and better model is likely going to be more useful in this case. That's sort of your intuition. Should we be playing much with the embedding model as we're playing with semantic chunking? Yeah, I mean, so I feel like the answer should be yes to this question. We'll have to wait for more kind of results and research for that to be upgraded to like a yes, definitely, that's true. You know, I'm not just saying it because it feels right, but it does feel right, right? So the better chunking we have, in this case, is related to how well we can capture semantic information about our sentence groups. So the better we can represent those sentence groups, the better chunking we should have, right? The smaller differences we should be able to capture. And so I think that's, I believe the embedding model should matter. Similar to how chunk size matters in, you know, naive chunking, right? Or the characters you split on matters. I think it's the same for the embedding model and its performance here. Because we're actually using it to do the chunking. That's correct. That makes sense. So it's literally directly dependent on it. Okay. So I want to go through a couple of questions here briefly. One super common, here we go. Are there better alternatives than cosine distance? What's up with this cosine similarity? Can we get better than this? Man, that's a tough one. I mean, the answer is yes. But everything that might be better is much more compute intensive. And that's kind of the game we have to play with these distance measures: we have to do a lot of them, right? So we have 500, you know, sentence groups. We have to compare each of those, right? That's a lot of calculation that needs to happen. And so for it to be very efficient and performant is very important. And I don't know that we have something that's drastically better than cosine similarity that would work well in that slot without blowing up compute costs. That being said, it does kind of depend on your data, and you could use some of the other simple distance measures if they work better for your data. But realistically, we build everything with this idea of cosine similarity into it. And so we want to keep using it. Yeah. And if you want to do a sensitivity analysis on all the different metrics, have at it. I think there's a lot of people working on their PhDs doing this kind of thing right now. All right. How compute intensive, speaking of compute intensive stuff, is this strategy in general? Would it be appropriate in the scenario where we have to constantly update and re-index stuff, or not really? Okay.
So that's an interesting question, right? If you have changes in documents that you're re-indexing, then this is definitely not going to be the option for you. If you're adding new documents, I mean, it doesn't matter, right? We're only doing this on each document, right? Whenever we add new documents, we have to do this process once, but then it's done. And so if you're working with documents that are themselves dynamic, so they change, this might not be a great strategy in terms of cost. If you're talking about documents that are added, so you have dynamic documents, but the dynamic part is that you add or remove documents from the pool, that's a great strategy. I mean, it is always going to be more expensive than a naive strategy, because we're having to use some embedding model. Even if we run it locally, even if we run it on CPU, it's going to take more clocks to get it done than you would have if you were just checking based on which character was being split on. So yeah. Okay. All right. Cool, cool. So, you know, we talked similarity, distance measure. We talked embedding model. The last thing, we get this question all the time. We'll end on this. What chunk overlap should I use, Wiz? Like, what's the answer? What chunk overlap is the answer? You should use the one that's best for your documents. And you should determine that through tools like Ragas or any other evaluation framework. You know, chunk overlap is a hyperparameter. We should do hyperparameter tuning of some kind. You know, in this case, maybe not literally hyperparameter tuning, but an analog. We should empirically determine the chunk overlap, as we should the chunk size, by running some kind of search and choosing the one that makes the numbers the highest. And there you have it, everybody. Wiz, I believe we have one more ask of everybody today, don't we? Yes, that's right. I forgot to say it. Please, it helps us a lot if you subscribe to us on YouTube, as well as click the little bell notification to get notified of our future events. We do one every Wednesday at this time. You can always find us here. So join us for your lunch break if you're EST. Otherwise, you know, thanks so much for tuning in today. Thanks, Wiz. Awesome, man. And it's time to close it out, everybody. If you like this session, definitely ring that bell. But also consider joining our Discord. We've got great vibes and we'd love to have you. It's growing rapidly and there's always some interesting conversations going on every day at this point. If you're looking to learn more for free and get started with production LLM applications, check out our open source course, the one we taught last August on LLM Ops, cohort one. You can find that on GitHub and we've got a number of YouTube videos associated with it. Of course, if you're ready to accelerate all the way out at the edge, we are happy to announce that we've got our upcoming cohort two of our AI engineering bootcamp that kicks off next Tuesday, April 2nd, and it's going to be the best yet. We've been rapidly iterating in cohort one, which wraps up this week. Check out LinkedIn for the demo day event that you could join us live for on Thursday and check out what people have been building, shipping, and sharing. And finally, if you have any feedback for us, we'd love to have it. Let us know what you'd like to see next.
Let us know how you thought these events have gone recently and what you're paying attention to out there from chunking to retrieval methods to agents to multi-agents and beyond. And as always, everybody, until next time, keep building, shipping, and sharing. We'll continue to do exactly the same. See you back next Wednesday or maybe tomorrow night for AIE1 Demo Day. Cheers, all. Have a great week.
Semantic Chunking for RAG
3,795
AI Makerspace
20240328
In this event, we’ll learn how the semantic chunking algorithm works! Text is split into sentences that are converted to vectors through an embedding model. Similarity is measured between each pair of consecutive sentences. If consecutive sentences are too different from one another, as defined by a threshold, an additional chunk break is created. This ensures that any two consecutive sentences that differ too much in meaning end up in separate chunks. In theory, this will allow us to achieve better results during retrieval within our RAG system. Event page: https://lu.ma/chunkingrag Have a question for a speaker? Drop them here: https://app.sli.do/event/eQnuJrAp9sN3RrhMUvMfMS Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/1UxkU7LbfV14f77p7
2024-06-10T02:06:06.423209
https://www.youtube.com/live/SEA3eJrDc-k
how they work and what it means to be agentic. Yeah, that's absolutely true, Greg. Absolutely true. And so like, is there one standard way to build an agent right now? No, not really at all, no. Has there ever been? No, not really. Has the way that we build agents evolved significantly in the field since even just last year? I would say even the way we think about agents has evolved. So yes, definitely the way we build them. That sounds like we've got a lot. Are you ready to tackle this big subject? As ready as you can be, yes. Yes, for sure, man. All right. I'm going to get right into it. We'll have you back in a bit for a quick discussion before we start our demos of the day. Welcome everybody. I'm Greg. That is Chris, AKA The Wiz, and we are co-founders of AI Makerspace. Thanks for taking the time to join us again this week. Today we talk agentic RAG. What you'll learn from today's event is a little bit of the context of where we were last year with agents and RAG and where we are this year. Hopefully we're going to clarify a lot of foundational concepts for you before we do quite a large build today that's going to cover a bit of the classic RAG and agentic thinking and a bit of the new school RAG and agentic thinking. So let's go ahead and get started. If you have questions along the way, do please ask them in the Slido or go ahead and slap them in the YouTube live chat during the Q&A. So today we cover agentic RAG. We need to break down exactly what this means, what it doesn't mean. And along the way, we will have a couple of key things that we are aiming at. Number one, we want to just cover agent terminology and architecture considerations as we get into this. We're going to cover core RAG and agent constructs within the LangChain ecosystem. And we're going to cover how to build an agent that contains both chains and graphs today. So this is going to be kind of a big one. We're going to talk agentic, RAG, and function calling first. We're going to talk chains, graphs, and the infrastructure that it takes to build agents. Next, we're going to do a couple of demos today. One little classic chain RAG, and we're going to then build a reasoning engine around a chained classic RAG setup, plus some additional tools that we can leverage through function calling. All right, so we need to break down all these terms, and we need to make sure that we're aligned on what exactly all of this is really talking about at the end of the day. First of all, one of the things that people will ask us as we teach this subject is like, when we use these words, agent, agent-like, agentic, are these the same thing? Yes. They're the same thing. Okay. Don't worry. There's no real need to argue subtle nuances here. The other thing we want to understand about agents is where they fit in exactly within our applications, even within a simple RAG application? And the answer is, well, wherever you want them to. Because agents, you see, are a pattern. They're not a specific thing. You can apply the agentic or agent-like pattern in lots of ways. What is this pattern? Well, fundamentally, when we talk about our field of LLMs, it's the reasoning action pattern. That's all it is, okay? And the reasoning action pattern is the old two-step loop. In practice, we call an LLM to do some reasoning. It helps us determine which action to take next or which response to give to the user. So the flow is like: user asks question, LLM decides. Do I know the answer off the bat? Maybe I'll go and pick up a tool.
Maybe it'll be a custom RAG build. Maybe it'll just be an API call to something like Wikipedia or to Google search. I will then consider what I'm getting out of that tool and I'll decide, okay, do I have the final response yet? If not, maybe I'll go through and pick up another tool. Maybe I'll break down the question in a slightly different way. There's lots of things that we can potentially do to keep looping through this to find the best possible answer. And this sort of reasoning step and this action step are kind of the two fundamental components that we want to focus in on as we think about the pattern behavior. And this high-level view is unchanged for non-technologists and non-engineers when they think about what could be built. Now, what happens behind this on the back end is the part that we'll get into today that has been evolving in a very exciting and interesting way. All right, let's continue to contextualize here today, because we talked agents, let's talk about RAG. We're talking agentic RAG today. RAG, very simply, is all about avoiding hallucinations through the retrieval of reference material. We find references, we augment the prompt with references, we improve our answer with those references. RAG is simply doing two things. It's doing dense vector retrieval and it's leveraging the idea of in-context learning. When we ask a question to a RAG system, that question flows through an embedding model after it's tokenized and chunked up. Those embeddings are then compared to any data that we have, that we've also tokenized, chunked, and embedded and saved in our vector database. We're doing a comparison here between the string in our question and any data in our database. We also set up a prompt template. It generally says something like, use the provided context to answer the user's query. You may not answer the user's query unless there is specific context in the following text. If you do not know the answer, importantly, or cannot answer, please respond with I don't know. Into this prompt goes the reference material similar to the question, although now it's back in natural language form. This process is called dense vector retrieval, and the idea of putting all of this additional context into the prompt, this is where the augmentation comes in, this is where in-context learning comes in. So this is RAG, and all this happens before we put our prompt into the LLM and get our output. So this is the process of retrieval augmented generation. Now, the thing about RAG is the same thing as agents. RAG is a pattern. It's not a specific thing. Okay. So if we consider agentic RAG, we can say then that agentic RAG too is simply a pattern. It's not a specific thing. Consider using an agent within a RAG system. If we say agentic RAG, most of us think of taking the RAG system, compressing it down into a single system diagram box, and then perhaps our agent sits atop that RAG system. That's probably how most of us think about agentic RAG out of the box, but it doesn't have to be, because we can do agentic retrieval, where the retrieval process is where we use an agent. We can do agentic augmentation. And of course, we could also do agentic generation. I want to invite Wiz back up to the stage here for a quick discussion. When we think about this idea of agents sitting atop RAG or within RAG, can you provide us with a few examples of the ones that we might use or consider using sitting within the RAG system, within retrieval, within augmentation, within generation?
How should we be thinking about this? Yeah, I mean, it basically comes down to behavior, or where do you want that complex reasoning behavior? You know, so I think for retrieval, it makes sense: an intelligently retrieved context seems like a powerful thing to be able to have. For augmentation, right, the way that we're augmenting and how we kind of post-process those retrieved documents and how we kind of, you know, interact with them or, you know, rank them or whatever you want to call the process, right? We can leverage, you know, this actual agentic behavior. And then generation as well, right? So when we're generating, not just generating one response, but doing some reasoning with our augmented prompts, and then finally providing a more comprehensive response, right? They're closely linked and you have to kind of quibble about it to really get down to it, but you can fit agents into any of these steps, and it'll presumably be better than if you didn't. Okay, okay. So if we think about agents as sort of this reasoning action framework, right? We're sort of then either leveraging the reasoning and action at retrieval or augmentation or generation or above. Is that correct to say? Yeah. I think that's a great way to explain it. Okay. Okay. So if there's any sort of reasoning that you have to do at any of these steps, you could potentially leverage an agent. Is that fair to say? Absolutely true. Yes. Interesting. All right. Thanks for your insights, man. We'll have you back in just a little bit for the first demo. So, you can see agents and agentic behavior are not the most straightforward thing in the world. Let's add an additional piece to our story here, because when we build with agents, we inevitably end up crossing the path of function calling. Now, function calling is all about connecting large language models to external tools. For our agentic RAG build today, what we're going to do is we're going to describe our tools and we're going to have the LLM choose which to pick up to answer the query. Specifically for us, we're going to use our tools as backup to a primary RAG system. So we'll decide if we need to go to backup, and that's what our agent, our reasoning engine that's going to leverage function calling, is going to do. Okay. What are we going to build today exactly? Well, today's agentic RAG build is going to have three tools in it. We're going to create a classic chained RAG application with what you might consider sort of private data. And then we're going to have this backup search capability through function calling for both arXiv papers and also directly on DuckDuckGo. And even within just having access to these three different things, there's a lot of configurations of how you might build an agentic tool like this. We have to outline exactly the way we've built it, but let's keep it general for now. Our agentic reasoning action loop, it's going to be helpful to think about like this. We ask a question. That question is then going to eventually have access to each of these three tools: the private data, the arXiv, and the DuckDuckGo capabilities. What we'll do is we'll set up a prompt template and it'll say something like: Have a conversation with a human, answering the following questions as best you can. You have access to the following tools. Tool one, RAG private data, has access to my private data.
"'Tool two, archive is good "'at finding relevant research papers. Tool three, DuckDuckGo, is used for general internet search. Now, if we had specific private data, we would probably want to provide some specific details here, but this should give us an idea of how this is working in general. Now, what we do is, as we say begin, what happens is we set up an agent scratch pad this is sort of keeping track of the state of affairs in our agentic reasoning loop and we of course need to connect this to a large language model. Generally, we're using an OpenAI LLM for this. And we want to make sure that this loop can be gone through. As many times as necessary, this reasoning action loop can have access to these tools. We can continue to think through things and write them down on this sort of agent scratch pad. And we can decide eventually to output a final response so this is kind of the big picture of what we want to kind of connect together today interestingly when it comes to building these systems, there are nuances that are important to keep track of. And this gets us into our portion on chains, graphs, and infrastructure. As I mentioned earlier, if I'm a user, nothing has changed in terms of how I engage with this system. I ask questions, I get a response. It's a chatbot. If I'm an LLM, if I'm the OpenAI LLM, nothing has changed. I get a prompt, I deliver responses, I have access to tools, same, same. However, if I'm a builder like you are if i'm a product manager if i'm an engineer the infrastructure that we have now to build agents is actually more useful than it ever has been what do we mean by useful well it's more intuitive it's more modular and it's more intuitive, it's more modular, and it's generally more capable. And Langchain allows us to build sort of from the ground up with production in mind, which is very cool. What do we mean by intuitive and modular? Well, intuitively, what we can do now Well, intuitively, what we can do now is if we can draw a diagram, let's say your product manager or director of product wants to draw a diagram. You can actually directly build that diagram using Langchain's ecosystem, specifically using Langgraph. This is super cool. Now as an engineer, you might think, well, I can always figure out how to build what's in the diagram. But this is sort of a beautiful kind of one-to-one of taking the diagram as outlined and sketched and putting it directly into code. This is a very powerful pattern, this graph pattern. It's also more modular. And it's more modular because of this graph approach. Now, recall exactly what we're talking about when we say graph. A graph is simply a collection of connected objects. You might think of this as a directed acyclic graph, a DAG. But what's happening here is we actually are able to do cyclical things in the graphs that we can build with Lang graph. So it's a little bit more than a DAG. It's a cyclic DAG. We still have nodes. We still have edges. And what we can do in Langchain is we can actually create cyclical graphs. This is useful because this allows us to add cycles to applications built on Langchain. Of course, Langchain started with the abstraction of chains. Langgraph is what allows us to do this. And specifically what Langgraph is doing is it's creating state machines that sort of track the state at each node by specifying them as graphs. Now you might think, ooh, state machines. I'm not exactly sure what you mean by that. 
And specifically, what LangGraph is doing is creating state machines, tracking the state at each node, by specifying them as graphs. Now you might think, ooh, state machines, I'm not exactly sure what you mean by that. Harrison Chase, the CEO of LangChain, gave a great TED Talk describing cognitive architectures. State machines sit somewhere in between routers and agents. In fact, we did a deep dive on state machines in our recent event on LangGraph, so we're going to avoid going into super depth there at the moment. The important thing to note for today's build is that when we talk about nodes and edges in LangGraph, we want to think of the nodes as functions, like external tools, or as LangChain Expression Language runnables, for instance a classic RAG chain. The edges are instead paths to take, or where to pass our state object next. Remember the agent scratch pad? Well, what that is really tracking is the state. It's tracking the state. Okay, so let's dive a little bit deeper into the tactics of LangChain. LangChain just came out with v0.1. This was a big release, and it included LangGraph. Remember, LangChain is all about two things: enabling LLM applications to leverage context and reasoning. For context, we can think of RAG, context augmentation; for reasoning, we can think of agents. These are both patterns, right? When we think of our classic RAG chain, the tools that we're using within LangChain Core and LangChain Community are, of course, built on LCEL, which underlies everything. We're going to use the model and the prompt in our build today. We could use the output parser or the example selector, but we're just going to stick with the model I/O and the prompt templating. We're also going to leverage these retrieval tools: the retriever, the loader, the vector store, text splitting, and so on. And we'll use chains to chain them together. Declarative, composable chains are very easy in LCEL, and this is very straightforward to build. This is classic RAG at this point. Now, when we talk about using LangGraph to do retrieval on external tools like arXiv and DuckDuckGo, this is still a RAG pattern, but now what we're doing instead is leveraging agent tooling and agents. Specifically, in our build today, we are going to do the aspect of our application that uses agents and tools using OpenAI function calling. Okay? So we've got the classic RAG chain, plus the RAG pattern leveraging the function calling capability to get access to backup arXiv or DuckDuckGo search. If we dig into our classic RAG build, this is pretty straightforward for anybody that's been building with LangChain. This is our little box called private data, and what we're going to do is leverage models, prompts, and retrieval components. The model input/output is classic, right? If we use an OpenAI model, we need to leverage the system-user-assistant message syntax; in LangChain, it's the system-human-AI message syntax, using a chat-style model. No problem. The prompt template, well, we've already talked about the prompt templating that we need for a classic RAG system in our diagram earlier. And then we need to create a simple vector store. This vector store is just going to load some documents in, split the text up, do some embedding, and store the vectors. Those are all the little tools we need right there. So this RAG setup looks something like this: if we use, say, a FAISS vector database, we're going to chain everything together, and we're going to load some documents in, split the text up, and so on.
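As a quick illustration of the model I/O point above, the system/user/assistant roles from the OpenAI API map onto LangChain's chat message classes like this (an illustrative sketch, not code from the notebook):

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

chat_model = ChatOpenAI(model="gpt-3.5-turbo")

# OpenAI's system / user / assistant roles correspond to LangChain's
# SystemMessage / HumanMessage / AIMessage types.
messages = [
    SystemMessage(content="You are a helpful assistant for questions about RAG."),
    HumanMessage(content="What does RAG stand for?"),
    AIMessage(content="Retrieval augmented generation."),
    HumanMessage(content="And what are its two main ingredients?"),
]
response = chat_model.invoke(messages)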
What we're going to do today is imagine a situation where we built a vector store, say in 2023 or a little while ago, and it's not quite updated yet. So we want to give it access to backup, live, up-to-date search capabilities with both arXiv and DuckDuckGo. We're going to do something that, if you've been following our channel or our content for a while, you've probably seen us do before. We like to call this meta-RAG, and we've been doing it for the past year or so: we'll search arXiv for the top papers on RAG, say the top three or top five papers, we'll turn them into embeddings, we'll put them in a vector store, and then we'll be able to query our RAG application about RAG. Then we return answers to the questions with sources. The way we can set this up is to use LangChain's arXiv loader to get papers from arXiv based on our question and build out this vector store. Imagine that we were going to set this up, and then in the future we wanted to augment it; this first step is just setting it up. We're going to take off-the-shelf models, text-embedding-3-small, the latest embedding model from OpenAI, and the chat model GPT-3.5 Turbo, and we're going to build out this simple meta-RAG system classically. This is what Chris is going to walk us through now. And then after we get our meta-RAG system set up, we'll be able to rock and roll right into our agentic RAG for the rest of the day. Wiz, over to you. Yes, thank you, Greg. Okay, so we're going to talk very briefly about how we set up this LCEL chain, and then we'll be moving on from there. The idea here is that we're going to be using a number of different components, the first of which is this LCEL chain, which we're hopefully used to using at this point, and then also we're going to touch on that LangGraph piece. The dependencies are pretty straightforward: we need langchain, langchain-openai, and langgraph, and then we're going to be using some tools, arXiv and DuckDuckGo search. We're also going to set up for our local retrieval pipeline. We are going to set up these two: faiss-cpu, which is what we're going to use to power our vector store, and PyMuPDF, which is something we're going to use to parse the PDFs we take from arXiv. And that's it. For environment variables, it's pretty straightforward. We're going to use OpenAI today, so we want an OpenAI API key. We're also going to use LangSmith. We're not going to touch on it a lot, but we do have this integration that comes for free, and the idea is that if you're using LangChain at this point, you should just also be using LangSmith, because this is all you have to do, and we're going to be taking advantage of LangSmith just like that. So we build a simple chain. This is your classic, normal, stock LCEL chain. We're going to first create a document loader, which is going to load documents from arXiv with the query "retrieval augmented generation". Got to do meta-RAG; it's always a great time. Then we're going to split those documents: we're going to chunk them into smaller pieces so that they're more compatible with our retrieval system, so that we're not putting the whole paper in when we ask questions but only the relevant pieces.
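A rough sketch of that loading and splitting step; the query string, document count, and chunk sizes below are illustrative assumptions rather than the notebook's exact values:

from langchain_community.document_loaders import ArxivLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Pull a few RAG papers from arXiv (PyMuPDF handles the PDF parsing underneath).
docs = ArxivLoader(query="retrieval augmented generation", load_max_docs=3).load()

# Chunk the papers into smaller pieces for the retrieval pipeline.
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunked_documents = text_splitter.split_documents(docs)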
We also have, of course, our chunked documents. Our chunked documents are pretty straightforward; they are just those split documents. We're going to use those chunked documents, as well as OpenAI embeddings with the text-embedding-3-small model, which is a very performant and, more importantly, inexpensive model, and we're going to use that to create our vector representations, or embeddings, for each of those chunks. Then we're going to set that resultant vector store up as a retriever. This is just wrapping it in that retriever interface, and that's it. The idea here is that we need something we can send a natural language query to and have relevant documents be spit out. And yes, so that's it. The next step is augmented, right? So we have retrieval covered; now we cover augmented. And augmented is just this prompt. We're going to take this prompt, we're going to take our question, and then we're going to augment with some context. We're going to ask our LLM, hey, using this context, answer the query; if you cannot answer the question, please respond with "I don't know." So the idea here is we're going to augment our prompt with our context, and that's where augmented comes in. All we've got left, of course, is generation. Generation is the easy part: we just point this thing at a model. Once we've pointed it at a model, we're ready to create our LCEL chain. So I've created the LCEL chain. I always provide this somewhat absurd commenting because LCEL chains are a little bit unintuitive if this is your first time seeing them; if it's your eighth time seeing it, hopefully you understand what's happening, but we'll just very quickly walk through it. The idea is straightforward: we start our chain by getting our question and giving it to our retriever, and that's going to be called our context. So when we pass a query to our retriever, we should get relevant documents; that's our context. And our question is still just our question. Then we pass our context through to the next step as context, and we retrieve it here just so we have it, so we have something to reference. Then we pass our question into our RAG prompt along with our context, since our RAG prompt expects both question and context. And then we pass that to our model, and that's our response. And there you go, clear as mud, right? So that's the piece. Then we test it, and it works how we expect. And we're just flexing LCEL a little bit here, so we do the await and the ainvoke. This is the asynchronous version of the chain, and the reason we're doing that is just to showcase that we didn't have to do anything special: we get async for free. Async is very important when we have hundreds or thousands or millions of people trying to interact with our application. And that's it. I mean, that's how we set up the classic chain.
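A minimal sketch of the chain being described, reusing the chunked_documents and rag_prompt from the earlier sketches; note that the notebook version also carries the retrieved context through to the output, which this simplified version drops:

from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Embed the chunks and wrap the vector store in the retriever interface.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector_store = FAISS.from_documents(chunked_documents, embeddings)
retriever = vector_store.as_retriever()

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# question -> retriever supplies context; the prompt is filled with both; the model generates.
rag_chain = (
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    | rag_prompt
    | model
    | StrOutputParser()
)

# Async comes for free with LCEL, e.g. inside an async function:
# response = await rag_chain.ainvoke({"question": "What is retrieval augmented generation?"})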
The only thing left to do there is to get this thing to eventually interact with LangGraph, but before we do that, we're going to hear from Greg a little bit more, and he'll take us to that next step. Yeah, thanks, Wiz. Nice little LCEL flex there; I like that. So, from classic meta-RAG over to our RAG and function calling build. Note that the RAG with the spider emoji here is the graph RAG piece. What we're going to do now is take the two tools that we want to have access to, arXiv and DuckDuckGo, and put them into this framework of function calling. We can visualize that, and it looks something like this. This allows us to leverage OpenAI function calling directly, just for these two tools. Now, what we're actually doing today is taking this setup and also adding this additional component of our private data RAG, which in our case is the meta-RAG system. The idea here is that as your systems evolve, perhaps your older documentation, your private data, hasn't been updated in a while, and you might want to make sure that you have access to live, up-to-date data, so you're not always having to constantly recreate your vector stores, constantly going back and searching for new stuff, turning it into embeddings, and putting it into the vector store. So this is the idea here. And as we go from this OpenAI function calling setup to our more robust reasoning agent, you might say that what we've done here is start to build out this agentic RAG with LangGraph and LangChain Expression Language. So how does this all look at the end of the day in graph format? Because we've been looking at this conceptually, but what does it look like if we really wanted to take that system diagram and turn it into something we can code up? Well, as a cyclic graph, as we mentioned, there are many ways we could potentially put this together. But what we want to do is put it together so that, as we start, we will by default go ahead and leverage our RAG chain. By default, we go to our more classic RAG chain setup, and we will potentially leverage the DuckDuckGo and arXiv tooling after we check that initial private database. As we get to the agent component, what's happening is we're asking ourselves: do I have enough to answer the query, or do I need to go search additional papers on arXiv, or do I need to go query DuckDuckGo and general internet search to find the answer to what I'm looking for? And within that loop we will stay, until we get something that is good enough that we decide we don't need to continue, and we can end and close the loop. So we're going to go back to Wiz now to talk about how to build this graph, RAG, and function calling piece, and then to do the final assembly of the agentic RAG with LangGraph and LCEL, where the initial RAG chain is a single node in our graph. Wiz, back over to you. Thank you, Greg. Yes. Okay, so this is kind of where the magic happens. We're going to have LangGraph. LangGraph, I mean, Greg's talked about it, and the notebook goes into pretty specific detail about it, but the idea is straightforward. We have chains, and chains are basically like a baby version of a graph: they only allow you to flow in one direction.
You flow from here to the next thing, to the next thing, to the next thing. LangGraph gives us the flexibility to define those nodes, the units that act on our state, and edges, the units that tell state where to go next. It lets us have very specific control of those components. So in essence, at the end of the day, LangGraph is just shepherding state around and doing stuff to it. Let's look at how that works in a very simple example. We're first going to create a tool belt. This is classic: we're going to use DuckDuckGo search and the arXiv query tool. These are just tools that are offered through LangChain Community, and we're going to make sure that they are able to be run. We're going to use a tool executor to do this. Now, the tool executor is basically just a way for us to actually run the tools. OpenAI, or whatever particular LLM provider you're using, can't run the tools on their end; we have them on our end, and we need some way to actually execute Python code in order to run those tools. That's exactly what the tool executor is going to do for us. Then we're going to select our model. Our model is pretty straightforward: we're just going to use ChatOpenAI. We are using function calling, and so we are using OpenAI. That's it. There are other models that implement OpenAI-style function calling endpoints, and you can get open-source models that will help you do that if you host them, but we're just going to use OpenAI because it's the most accessible with the least lift. Then we're going to do this next bit. All you have to worry about here is that it's going to do a lot of heavy lifting for us: we don't have to define Pydantic models and then define functions and then build prompts and descriptions. We can instead just use this convert-to-OpenAI-function helper with the compatible tools, and that's what we're going to do. How does this compare to AutoGen? Seems like you might have flexibility for custom flow design. It sure does, Raymond. Absolutely true. And then we have this very cheeky bit of putting the state in stateful, right? All state is, is an object; it's just something that's keeping track of stuff. We want to use state because it's important: we have to know what happened before and use that to inform what happens next. The idea is that we're just passing around this state object. It's analogous to a more flexible, more portable, better version of that scratch pad or that message history. And in fact, basically all we're doing in this particular LangGraph example is using an annotated sequence of base messages, which is really just a list of messages, just chat history, and we're using this operator.add. operator.add makes it so that we can only append messages to this state; we don't want to reset them or modify them in a way that deletes things. We certainly will want to do that in a more complex application, but for this example, we don't need to. There's a question in the chat from Aman: temp equals zero because we don't want the model to imagine things? Loosely, yes, that's true: we want it to stick to the context as much as we can, and temperature zero helps it do that.
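Here is a rough sketch of those pieces, written against the langgraph prebuilt helpers as they existed around the time of this event; treat the exact imports and method names as assumptions, since these APIs move quickly:

import operator
from typing import Annotated, Sequence, TypedDict

from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.tools.arxiv.tool import ArxivQueryRun
from langchain_core.messages import BaseMessage
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolExecutor

# The tool belt: general web search plus arXiv paper search.
tool_belt = [DuckDuckGoSearchRun(), ArxivQueryRun()]

# The executor actually runs the tools on our side when the LLM asks for them.
tool_executor = ToolExecutor(tool_belt)

# Convert the tools into OpenAI function schemas and bind them to the model.
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
functions = [convert_to_openai_function(t) for t in tool_belt]
model = model.bind_functions(functions)

# State: an append-only list of chat messages (operator.add appends on every update).
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]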
And then we're going to make this graph, right? If you want to go deeper into this, we did a LangGraph event, but the basic idea is we're going to make these nodes. Nodes are just functions that take in state and return some modification to state. Because we're using operator.add, when we return this state object with our response, it's going to append that message to our state object. And that's as easy as it gets. Every node basically takes in state, does something to it, and then returns a modification to state. That's the idea. We do that for call model, where we just call our model on all of the messages we've got so far, and we do that for call tool, where we use it to invoke a tool, depending on whether the LLM determined we needed to use one. We call the tool with its name and its arguments, send it to that tool executor we just talked about, and then append the result as a function message. OpenAI will expect a function message as the message after it suggests using a tool, so it can use that message to determine what the response to our original query should be. And then we're going to start making the graph. This is not so bad. Basically, all we have to do is create our state graph by passing in our agent state. Then we add a node called agent and make that node our call model function. We add a node called action and make that our call tool function. This is what we've got so far: agent and action, easy peasy. We set an entry point so that when we first enter our graph, we enter into agent. Then we set up this conditional edge. The conditional edge is going to check: hey, did the last message say to use a function call? If it did, we should go on to the action node, and if it didn't, we're done. So then we create that conditional edge, which just looks like this: we get an input, our agent says, hey, we need to use this tool, so we continue to the action node; or it doesn't say we need to use a tool, and then we just say, all right, we're done. There you go. Now, this is almost done, but right now, if we get to action, we just kind of peter out, right? We stop. So we need to get back to agent, which we can do by adding that edge from action to agent. That creates this cycle. So now we have this possible cycle and this exit, and there you go: we're good to go. We can compile that and then begin using our graph, easy peasy.
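A sketch of that basic two-node graph, reusing the model, tool_executor, and AgentState from the previous sketch; function bodies are abbreviated and the notebook's exact code may differ:

import json

from langchain_core.messages import FunctionMessage
from langgraph.graph import END, StateGraph
from langgraph.prebuilt import ToolInvocation

def call_model(state):
    # Run the function-calling model over the full message history.
    response = model.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state):
    # The last message carries the function call the model asked for.
    call = state["messages"][-1].additional_kwargs["function_call"]
    action = ToolInvocation(tool=call["name"], tool_input=json.loads(call["arguments"]))
    result = tool_executor.invoke(action)
    return {"messages": [FunctionMessage(content=str(result), name=action.tool)]}

def should_continue(state):
    # If the model asked for a tool, keep looping; otherwise we're done.
    if "function_call" in state["messages"][-1].additional_kwargs:
        return "continue"
    return "end"

workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue, {"continue": "action", "end": END})
workflow.add_edge("action", "agent")  # close the cycle back to the agent
app = workflow.compile()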
So what if we wanted to create this, which is a slightly more complex version of that graph, where we first do RAG and then we see if RAG was enough? If RAG was enough, we end. If it wasn't, we enter into this possible agent-action cycle. Well, there are two things we need to know. Number one, we can use any LCEL runnable as a node, so you can use any LangChain chain as a node. However, as we've previously discussed, the input to our nodes is state, and some modification of state is expected as output. So we have to wrap our LCEL chain in this convert-state-to-query step, which converts the state object into something compatible with the chain we built earlier, and then we put a parser on the end, which parses that response back into the expected state format. And that's it. This is still just an LCEL chain; we can use it exactly as we used it before. It's just that the inputs and the outputs are slightly different. But it's still an LCEL runnable; we've just made it compatible with the input and output that our state represents. So we do that, and it works great. We can still take advantage of the out-of-the-box asynchronous LCEL components. Let's go, huge. Now what we're going to do is create our agent node, same as before, and our action node, same as before. And we're going to add this first action node, which is going to be our newly built LCEL chain, the one we built up above. We're going to call that first action, and we're going to set the entry point of our graph to first action. Then we're going to build this conditional edge. This is just the representation of that edge as a Python function. Basically, we grab our state and get our question and answer from it. Our question is our first message, and our answer is our most recent message, because all we've done at this point is pass the question into our chain and receive a response from it. We're going to use this Pydantic pattern to extract a yes or a no on whether the question is fully answered by the response. If it is fully answered, then we just go to the end. If it's not fully answered, then we continue into our potential agent-action cycle. And that's all this is doing. You'll notice that in this conditional edge, we have an LCEL chain. So we have an LCEL chain as a node, and we have an LCEL chain in our conditional edge; we can really put these kinds of objects wherever is most convenient for us, which is very handy. We then define that conditional edge with the is-fully-answered function, where we go from first action to agent if we need to, or to end if we don't. The rest of this is the same as above. And when we use it, we can ask: who is the main author on the retrieval augmented generation paper? We don't go to any tools; we just immediately get the answer and then we exit. We never hit the agent-action cycle. Versus this question: who is the main author on the retrieval augmented generation paper, and what university did they attend? There we do, in fact, need to get to the agent-action cycle, where we're going to use our arXiv tool to try to find the answer to the question. And that is the idea of LangGraph. That's how we can use it with RAG, that's how we can incorporate LCEL components into it, and that's how we can even use LCEL chains within conditional edges. So we can do it all with LangGraph, and we wind up with this nice-looking graph at the end of the day, which is going to express some state machine.
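And a sketch of the first-action node and the "is it fully answered?" conditional edge just described, reusing rag_chain from the earlier sketch; the Pydantic model, the grading prompt, and the use of with_structured_output are illustrative assumptions, since the notebook may parse the yes/no differently:

from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
from langgraph.graph import END

def convert_state_to_query(state):
    # The first message in state is the user's question.
    return {"question": state["messages"][0].content}

def parse_to_state(chain_output: str):
    # Wrap the chain's string response back into the message-list state format.
    return {"messages": [AIMessage(content=chain_output)]}

# The classic RAG chain from earlier becomes a graph-compatible runnable.
first_action = convert_state_to_query | rag_chain | parse_to_state

class FullyAnswered(BaseModel):
    answered: str = Field(description="'Y' if the question is fully answered, otherwise 'N'")

def is_fully_answered(state):
    question = state["messages"][0].content
    answer = state["messages"][-1].content
    grader = ChatPromptTemplate.from_template(
        "Question: {question}\nResponse: {answer}\n"
        "Is the question fully answered by the response?"
    ) | ChatOpenAI(model="gpt-3.5-turbo", temperature=0).with_structured_output(FullyAnswered)
    verdict = grader.invoke({"question": question, "answer": answer})
    return "end" if verdict.answered == "Y" else "continue"

# Wiring, on a fresh graph that also has the agent/action nodes from before:
# workflow.add_node("first_action", first_action)
# workflow.set_entry_point("first_action")
# workflow.add_conditional_edges("first_action", is_fully_answered, {"continue": "agent", "end": END})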
And with that, I'll pass you back to Greg, who will take us to Q&A. All right, yeah. Heck of a build today. Thanks, Wiz. That was the final assembly for the agentic RAG with LangGraph and LCEL build. In conclusion, we can remind ourselves that agents, RAG, and agentic RAG are all patterns. They're not specific things, and we need to be flexible in the way we apply them. The agent pattern is simply the reasoning-action pattern, and it's as easy as that to decide how you should go about building specific applications for your use cases. Our classic meta-RAG that we built with LangChain Expression Language and chains was actually just a single node within our LangGraph build, and that node was one we always started out with. We set it up so we had backup search for both arXiv papers and DuckDuckGo, and that was the architecture and infrastructure we chose to use. Remember, LangGraph is highly flexible. It is simply shepherding the state around, allowing you to do lots of different stuff. There are a million ways you could potentially build any given application, and it's up to you as the builder to work with your team and decide what's best for your users when it comes to building out agentic RAG applications. Looks like we've got some questions in the chat and in Slido, and if you have any additional questions, I'd like to welcome Wiz back up to the stage as we get into it. Please go ahead and hit this QR code and drop your questions directly into Slido. So we'll start in Slido here. Chris, let's kick it off with: can agentic RAG expand the context window of LLMs? Say, summarize a long document section by section; some sections may not be as useful as others. Oh, it's a good thing I didn't immediately say no. So no, it can't literally expand the context window. However, it can increase information density in the context window. So we can do exactly as you outlined: we can take our information from this kind of huge, token-y mess that has very little useful information and boil it down to only the most useful parts. Absolutely, yes, you can do that. Okay. It looks like, let's go to this question on hallucinations in Slido. Squashing hallucinations is hard enough when trying to go to market. How do I create awesome, environment-aware, autonomous agents using RAG that can be trusted? That's a great question. I think if I had the answer, I'd probably be on a boat somewhere in the tropics. But no, as we evolve and mature in this space, I think one thing we're going to see becoming more and more important is this idea of sourcing, or providing references, and letting humans determine if things are true or not. We can definitely ground LLMs through things like RAG and other processes. We can use self-reflection and self-refinement to try to make sure that their outputs are in some way, quote unquote, true, but we haven't come up with a way yet that solves the problem. We just have ways that help move them closer and closer to not hallucinating. I mean, the truth of it is: how do we get humans that can be trusted, that never hallucinate? That's also a very difficult problem we've been trying to solve for as long as we've been walking around. At the end of the day, I think we're moving closer and closer to that, and we're certainly approaching human-level performance, but it's not yet solved, and I don't expect it will be solved in the next few months, at least. Yeah, you know, we've been talking about full autonomy in class, and I love this idea: do any humans have complete and full autonomy outside of the confines of their managers and the business? It's like, no, that's not a thing.
You know, so this idea of state machines, I think it's really important for everybody to realize that it allows us to stop short of full autonomy and just peek in at any given state of affairs in our application. So I want to go back, because everybody's talking about it, everybody wants it: we're going to go back to context window stuff here for a minute, and pull some context window questions from Slido. With massive context window LLMs like Gemini, why even bother with RAG? Why not just put the entire code base in the prompt? Yeah, I mean, it's a fair question. It's worth thinking about and talking about. The idea comes down to two major points, I believe. One is that it's difficult to shuffle around objects that large cheaply. Certainly, it's true that we could always add the entire code base into the prompt every time we want to do a query, but with gigabyte- or petabyte-sized data repositories, eventually we're going to run out of either room or memory in whatever machines we're using. And then secondly, there's this idea of, well, we'll just add it all in there. But if you have a prompt that's a million tokens long, even the papers that have come out with these impressive-context-window models say, yeah, but the latency at that size is insane. We're waiting 30 seconds for a needle in the haystack. And the meta point to bring up there is that retrieval systems are very good at extracting the information we need. The LLM ultimately will always answer the question correctly if we give it the correct information, and giving it all of the information doesn't necessarily mean giving it the correct information, when that information could be confusing or potentially contradictory. So there are a lot of different issues. Someone brings up in the chat that these models don't remember everything, so we do have this lost-in-the-middle problem. And even if we solve lost-in-the-middle, we kind of have a lost-in-between problem, because we might have context broken up across wide ranges in the text, which with regular retrieval augmented generation is not a problem, but here becomes a problem because the model is not likely paying attention to the very beginning of the context and the end at the same time. So there are a number of different issues with these long contexts that have yet to be solved. Now, does that mean they're not useful? Heck no, they're super useful. They will make RAG better, they will make our interactions with LLMs better; it's just not the case that long context is going to replace RAG so much as augment RAG. Now we'll have augmented retrieval augmented generation: ARAG, coming soon. But that's the idea, yeah. Well, I'm also reminded of the fact that when we look at these metrics for evaluating our RAG systems, we're often choosing re-ranking algorithms and other things to try to make sure we don't have too much duplicative information. It's not just about things that are too divergent; if we have the same stuff over and over again, all the time, that repetition isn't necessarily good for the generations. But I think, you know, as Sam and Lex were talking about this week, trillion-token context windows are coming soon. Next 10 years, why not just dump in all the data from everywhere?
I mean, to me, it seems like that would be very useful for people that have no idea how to build RAG systems, for sure, but maybe less so for those of us who are builders and can quickly spin something up. Yeah. I also think someone in the chat, Olivier, said something that's very interesting and worth repeating. For agentic flows, we actually don't care too much about the long context; we want strong reasoning capabilities. And that extra context isn't necessarily going to get us stronger reasoning capabilities, and in fact it might negatively impact reasoning capabilities, depending on how long the context is. So I think that's another piece where we have to think about it like: yes, that long context is huge for some tasks, including ones that overlap partially with RAG, but for others it's just not necessary and/or potentially harmful. So, okay. I think what I'd like to do is end on this question about the reasoning engine, GPT-4, as this sort of evaluator. Can GPT-4 be replaced as the evaluator of whether the question was answered or not, with humans? Can we put a human in the loop there? Can that type of logic be implemented with LangGraph? Can you, I don't know, ping the product manager or ping the customer support person? Literally, yes. Absolutely. There are LangGraph examples in their docs; they have an example use case where you do set a human in the loop. You can put humans in the loop at any of the components, including the user: you can let the user tell you if their question was answered by that initial RAG pipeline and then let it go into the more complex flow. So yes, absolutely, that can be implemented in LangGraph, and we can let the human be the evaluator in any number of fun ways we can think of. We can even replace it with a preference model from RLHF: if we have an RLHF-tuned reward model that we know is fairly performant and aligns with our users' preferences, we can use that to determine whether this is a good response or not and then pass it into further parts of the chain. So absolutely. This is the flexibility of LangGraph, right? It takes in state, does something to state, and then returns a modification of state. Anything can exist between taking in state and returning that modification of state. You can put cats in there. You can put whatever you want. That's the power of LangGraph. Okay, okay. Two rapid-fire questions before we end today. Why didn't you have to explicitly pass question to RunnablePassthrough? Oh, we don't have to pass question because question is already being propagated by the rest of the chain. We're adding that context in order to allow it to pass through, so question is already part of the keys in the runnable. Okay. And does DuckDuckGo offer any advantage over the Google Search SERP API? Why are we using that? 100% no advantage at all. It's just free, doesn't require you to sign up for an API key, and is therefore the most accessible. Boom. Thank you, Wiz. Your insights, as always, are much appreciated, and can't wait to see you back next time. Yeah. And don't forget, guys, we love you coming out. Don't forget to subscribe and ring that bell to get notified of when we go live next. We're here every Wednesday talking about cool stuff. So thanks so much, Greg. We'll see you soon. Boom. Love it.
And if you like this session, then you'll probably really like our Discord community vibes. Shout out to Mert for dropping in the chat. Definitely join our AIM community; we've got community sessions and lots of stuff going on every week. If you're the type that would like to further tinker on your own with lots of different concepts and code, you can find all of the events and all the code that we've shared publicly directly on our GitHub with the awesome AI Makerspace, or AIM, Index. Check it out. And if you're looking to accelerate your LLM application development from prototyping to production, and you're one of these async self-studiers, we recently open-sourced the first LLM Ops course we ever built, LLM Ops: LLMs in Production, Cohort 1, fully available with all code on GitHub. We look forward to providing more open-source resources and courses for folks out there around the world learning. Now, if you, on the other hand, are not so great at asynchronous self-study, if you've been spinning your wheels for a long time, or you just need to get up to date as quickly as possible and don't want to play version control yourself, and you want the most hands-on, most up-to-date, most current curriculum, concepts, and code, and you're ready to super accelerate, because this is a serious course: we call it the AI Engineering Bootcamp. We've got an upcoming cohort in just a few weeks, and we'd love to see your application come through. If you're not a fit, I will personally work with you to help you get on the path to where you might be a fit in future cohorts. Anybody can get in the game today. Finally, if you have any feedback, we'd love to hear it, whether directly through our feedback form that we'll share now or on Luma. And as always, everybody, keep building, shipping, and sharing, and we will do the same. We'll see you next week. Have a great one.
Agentic RAG with LangChain
3717
AI Makerspace
20240320
​In this event, we’ll provide a brief history of Agents before spending time on the details of building what is becoming the industry-standard agentic RAG application, namely, one with access to backup internet searches. Per LangChain's recommended best practices, we’ll use OpenAI Function Calling to build an OpenAI Functions Agent. Event page: https://lu.ma/agenticrag Have a question for a speaker? Drop them here: https://app.sli.do/event/8mC2FudkjQaogRZKusiGm8 Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/eySGueVq29mTSbN19
2024-06-10T02:14:38.693446
https://youtube.com/live/K_8a056X4ys
Hey Wiz. So this REFT, this new reasoning with reinforced fine-tuning, is it basically just RLHF or RLAIF wrapped in a different package? It shares a lot of similarities with those, yes. Okay. And we've also got this FT, this fine-tuning thing, but I thought we were doing fine-tuning in the other alignment methods we already looked at. We are, but we're doing more of it, right? Aha. So if I ask what the difference is between this REFT and the previous methods we looked at, what would you say? Well, we spend less time in the SFT step, so we don't actually fully supervised fine-tune the model. And during that PPO training, we allow our model to explore multiple options, and then we consider that when we're updating our weights. Wow. Okay. It sounds like we've got a lot to untangle today. You ready to get into it? You know it. All right, we'll be back in a little bit with some discussion before we get into the demo. Thanks, Wiz. Today we're going to try to tackle this new idea of reasoning with reinforced fine-tuning. It's kind of a remix of ideas we've covered before, and if you're familiar with alignment techniques like RLHF and DPO, it's going to be one you can get a handle on pretty quickly. Today, you'll walk away understanding exactly what we're doing to combine chain of thought with RL to produce this interesting new, higher-performance way of doing math problems. And of course, we'll walk you through the concepts and code you need, as usual. I'm Dr. Greg, that's the Wiz, and we'll go ahead and get into it for today. If you've got questions, please drop them either in the YouTube live chat or in the Slido link in the description box, which we'll share right now. So let's go ahead and rock this out: alignment with R-E-F-T. And of course, this is the continuation of our alignment series, and each time we meet, we're always aligning our aim at AI Makerspace. So by the end of this session, you will understand just how REFT can enhance reasoning capabilities using just a single set of training questions, so you can imagine that this has potential impact beyond the math problems we'll look at today. We'll get a handle on how REFT includes two primary phases, not fundamentally distinct from other alignment techniques we've looked at: the warm-up and the RL training phases. And then we'll show you exactly what the code released with the paper looks like, and how we can leverage it to do our own REFT. All right. So first we're going to contextualize this with the previous alignment series, the RLHF, RLAIF, and DPO stuff that we've done. We'll get into REFT, we'll talk about the two phases, and then, of course, we'll do the thing. And we'll answer the question by the end of the day: does it do the thing we really needed it to do? Spoiler alert: yeah, it appears to do exactly what it says on the tin. So let's contextualize the discussion. If this is your first time joining us, if you haven't seen the rest of our alignment series, you might want to check that out after we give you some trailheads today. We've covered alignment with reinforcement learning with human feedback, reinforcement learning with AI feedback, and the emerging new best-practice standard, direct preference optimization. You can find all of those at this link on our YouTube channel playlist.
The purpose of alignment, though, in general, as we've seen (and this is a plot from the Llama 2 paper), is to more or less reward good behavior. Good behavior can be defined, and we have defined it so far, as behavior associated with being more harmless: we take helpful models and we make them more harmless. We've aligned models, like in the Llama 2 paper, using supervised fine-tuning, SFT, and reinforcement learning or direct preference optimization approaches, to achieve a distribution of reward scores, given a series of prompts, that increases the likelihood of producing a highly rewarded output. So this alignment, this rewarding of good behavior, it turns out you can actually align towards many different objectives. It doesn't simply have to be about being less toxic or less harmful. And in today's lesson, we'll see that this alignment technique of REFT kind of straddles the line between alignment and fine-tuning, and what we're really doing is aligning the model, using the same proximal policy optimization scheme we've seen before, towards becoming better at solving math problems. How are we doing this? Well, let's break it down. The paper came out in January 2024, just a couple of months ago, from ByteDance Research. And the big idea is that we're combining reasoning, in other words chain-of-thought reasoning, with reinforced fine-tuning. So let's tackle the first piece here first. When we talk about chain-of-thought reasoning, we're referring to the paper that came out in January of 2022 from researchers at Google. The idea is that when you answer a question, when you are, let's say, a middle school math student asked how many digits it takes to write out the numbers from 1 to 500, and you get A, B, C, D, E choices, a student not trying that hard might just say, the answer is B. And then you'd be like, what? Did you just cheat? How did you know that? But a student that's ready to tell you exactly how they got to the answer would say: well, there are nine one-digit numbers from 1 to 9, there are ninety two-digit numbers from 10 to 99, and there are 401 three-digit numbers from 100 to 500, which is another 1,203 digits. If we add all these up, 9 plus 180 plus 1,203, that's 1,392. Therefore, the answer is B. You're like, oh, okay, you really do understand. We want our LLMs to be able to do this too. And although this chain-of-thought paper went far beyond math problems, we're going to focus today's lesson on math problems. Since this was work done by researchers at Google, what they did, on this math problem dataset called GSM8K, which we'll look at more closely in just a moment, is they took the fine-tuned GPT-3, the big daddy 175-billion-parameter model, and they also took their GPT competitor, PaLM, their big daddy 540B model; this was in 2022. And they pitted them head to head. With standard prompting, not so much; but if they included this chain of thought, this thinking through on the math word problems, they could get a much, much better score. And of course, this is how you know that young learners are understanding the math. This is how we all came up in the game. It turns out this Grade School Math 8K dataset was originally curated by OpenAI, and you can find it at this GitHub link here. It has problems like: Mrs. Lim milks her cows twice a day. Yesterday morning, she got 68 gallons, and in the evening, she got 82. The next morning, she got 18 gallons fewer. After selling some milk in the afternoon, she only has 24 gallons left.
If she's selling each gallon for $3.50, how much money did she make? And you can track it: 68 plus 82 is 150 gallons from yesterday; 68 minus 18 is 50 gallons this morning; you add them all up to get 200; you subtract the 24 gallons remaining to get 176 gallons sold; and then 176 times $3.50 gives you the final dollar amount she made, $616. We want our LLMs to be able to do this. And since the paper came out in 2022, we've seen, and this is from Papers with Code, that the PaLM 540B back in '22 did pretty well, and GPT-4 has been crushing it since. That's really not the point of today, but it's worth noting that this sits at about 97% accuracy today: the best models out there can do very well on these types of problems. And in fact, this is grade-school-level math: GSM8K, 8,000 grade school math questions. Okay, so that's the idea of the chain-of-thought piece. How do we combine it with this reinforced fine-tuning piece? That's really the crux of the matter today. Well, here is the figure from the paper. In stage one of reinforced fine-tuning, what we're doing is supervised fine-tuning. Now check this out. Question x: Wang earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. Okay, well, she can only earn 10 bucks then, right? The chain-of-thought piece, denoted e here, is the piece that we're doing fine-tuning with, in addition to the question and answer. So the fine-tuning that we're doing, across a few supervised fine-tuning epochs, is on input and output, but also includes this chain of thought. We see this across a number of epochs. Simple, okay? All we're doing is some SFT. Classic. Just adding a little chain-of-thought action to it. What we do in the next stage is the REFT stage. Once we're nice and warmed up, we go and perform RL on the same training data. And this is the image they provide in the paper, a great figure, in my opinion. We see this question, this on-policy sampling, this x, e-prime, y-prime terminology, and then we see this golden reward: is y equal to y-prime? This is potentially a bit nebulous straight from the figure, so we're going to try to break it down in the context of what we've already talked about in previous events, so that it's super clear exactly what's going on. The idea is that REFT starts from the warmed-up position produced by SFT and performs this reinforcement learning using a proximal policy optimization scheme. So we can ask the question, and we'll ask it a few times today: is it really that different from RLHF? Well, let us recall RLHF for a second. Step one was simply instruction tuning; we wanted to get more helpful. This is from the InstructGPT paper. We call this training a supervised policy in RLHF, and of course we're just doing supervised fine-tuning. Step one, supervised fine-tuning. Step two, train a reward model. This was something that allowed us to decide which response came out less harmful, and so we would go through this training of a reward model piece. And finally, the policy optimization scheme, where we're updating the weights relative to the reference model. You could view it like this, but as we've started to become accustomed to, if you've been following our content, we have been viewing it a little bit more like this. Prompts go in.
We run them through an initial model, a reference model, and a tuned or policy model. This is the one we're aligning; you could call it our aligned model. And then we're checking that these models are close enough, not too far apart; this is that KL divergence that we're looking at. Then, of course, we're leveraging the reward model that we've trained to get a reward score from the tuned, policy, or aligned model output, before we update the LoRA weights in our attention layers and go through this process more times for a given number of iterations. For more on RLHF, check out our RLHF event; we've gone into a lot of detail there. We can also recall RLAIF, and we saw something similar there. We saw that in stage one, look at this, supervised fine-tuning. We called this the SL-CAI model because we were doing constitutional AI in that particular event. And stage two was the reinforcement learning piece. Look at that: SFT, then RL. And again, we're doing constitutional AI; this is the figure from that paper, which we go into in a lot of detail in that event. And of course, there was this response-critique-revision piece that is part and parcel of constitutional AI, but it's a bit unnecessary for today's discussion. But perhaps it's not that useful to look at everybody else's reinforcement learning system diagrams, so let's go back to the one that we prefer. The proximal policy optimization scheme for RLAIF looks exactly like what we just saw for RLHF, only the reward model was built and trained leveraging AI. Now, it's worth noting here that further RLAIF work done by Google, building on the work done by Anthropic, showed that you didn't actually have to train a separate reward model at all, and that was pretty interesting; we went into some depth there in our RLAIF event. Check that out for more info. Finally, we can recall DPO. DPO made everything even more direct: more direct than not training a separate reward model and instead prompting an LLM to act as one. We weren't even using an LLM anymore; rather, we're using an analytical mapping, a math equation, to establish our rewards. That math equation looked like this loss function. We won't go through the details, but of course we have the details available in that event. Again, let's put this into the context of the diagrams we're used to: the direct preference optimization scheme looks something like this. Now, you'll notice we changed the name here from initial and tuned-or-policy to aligned; we're talking about the same exact thing. Prompts go in. There's a reference model, there's an aligned model. We're checking KL divergence, that these are not too far apart. Then we're calculating a reward score for both, we're plugging that into a loss function, and we're updating LoRA weights. And of course, for more on DPO, you can check out our DPO event.
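For completeness, the "math equation" loss function being gestured at here is the standard DPO objective from the DPO paper, written in LaTeX:

\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]

Here y_w and y_l are the preferred and rejected responses, sigma is the sigmoid, pi_theta is the aligned model, pi_ref is the frozen reference model, and beta plays the role of the KL constraint that keeps the two from drifting too far apart.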
So in light of all of this, let's reconsider REFT. We have stage one, SFT. We have stage two, reinforcement learning. Sounding pretty familiar at this point. So what we might say here is that we're kind of doing exactly the same thing for REFT as we were doing for RLHF and for RLAIF. We are simply putting in prompts, we've got a reference model, we've got an aligned model, we're checking that they're not too far apart, KL divergence again, it's still PPO, and we're leveraging this reward function. It's also worth noting that in the paper implementation, they were not using a PEFT LoRA implementation of the updates, which we'll get into in the demo here; rather, they were updating the full weights. This provides some challenges that we have to overcome in the demo that you'll see shortly. But for right now, the key is this reward function. What is this reward function in REFT? Well, the reward function is very simple, and it's pretty cool that it works. For a given prompt, we're going to generate multiple responses, and we ask ourselves: is the correct numerical answer contained? We're going to generate a full chain-of-thought response, and we're just looking for whether the right answer is contained anywhere within that chain-of-thought response. If yes, reward that. That's good behavior. Good job, LLM. If no, you get no reward. Now, here's an interesting little tweak as well: if there's at least one numeric character present, we give it a partial reward. So if it decides to generate at least one number, it gets a bit of a reward. Just to triple down on this idea: just as in some RLAIF methods (not constitutional AI) and in DPO, there is no separate reward model. There's just a reward function, similar to the loss function directly used in DPO, or to the prompted LLM that you could use in some of the more streamlined RLAIF schemes. So given this context, I want to invite Wiz back up to the stage and have a quick discussion here. It seems like everything's the same, but not quite the same. The first question is: is this warm-up thing just SFT? Is that all that's happening, or is there something else going on under the hood here? Nope, nothing else going on under the hood. It's just SFT. The thing we might think is different about it is that we don't care to train it fully, to completion. We're happy to just do a couple of epochs, and that's why we're going to call it warm-up: this idea that we don't actually care if that SFT step results in a model that performs as well as it could on our task. We don't care to wait for convergence. All we care about is that we get it heading in that direction with SFT, taking those big steps, so that we can be ready to take those smaller steps through the PPO training process. Okay, okay. So generally when we're doing SFT, we're looking for convergence in the loss function, is that right? We'd like to see that? Yeah, that's the idea. Okay, so here we're not concerned about that at all; we're just doing some fixed number of epochs, just to prime the pump kind of thing? That's exactly right. Yeah.
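Going back to the reward function Greg described a moment ago, a minimal sketch of it might look like the following; the simple substring check and the 0.1 partial-reward value are assumptions for illustration, since the paper's code extracts and compares the final answer more carefully:

import re

def reft_reward(generated_cot: str, gold_answer: str) -> float:
    """Score a generated chain of thought against the gold numeric answer."""
    # Full reward if the correct numerical answer shows up anywhere in the generation.
    if gold_answer in generated_cot:
        return 1.0
    # Partial reward if the model at least produced some numeric content.
    if re.search(r"\d", generated_cot):
        return 0.1
    # Otherwise, no reward.
    return 0.0

# Example: reft_reward("12/60 * 50 = 10. The answer is 10.", "10") returns 1.0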
Which is that with traditional RLHF, we're just going to kind of, you know, compare the distribution that our policy creates versus the distribution that our reference makes, and then make some adjustment to our weights. Whereas in ReFT, we kind of do this guided process where we actually verify if the model is generating correct responses, and not just a correct response, but whether it can express that in different ways, whether we can get these combinations of correct responses, which is going to lead to better generalizability on that specific task. So whereas RLHF is going to laser focus us in to be very, very performant at this task, there's maybe a generalizability trade-off you're going to pay for that. And ReFT is trying to solve that specific issue. Ah, okay. So when we're using RLHF as we've used it in the past, I mean, we're not making generalizability trade-offs, right? We're just aligning it in general, right? To sort of anything you might use it for, or we're aligning it after we've made those generalizability trade-offs, right? It's sort of like, we're not, well, fine-tuning it. Although we are. I would say it's actually flipped. So the idea is that with RLHF, right, we're going to get very good at this one specific task, and we should see that it's very good at that specific task in those specific settings, but we lose generalizability, because when we're training, we want it to be very good at that one specific task. Versus ReFT, where we allow it to see a sample of potential outputs and to score and adapt to those distinctly, so we're more likely to have better generalization from the ReFT method than we are from RLHF. Oh wow, okay, interesting, interesting. So could we use both methods to align towards other things than harmfulness? Both are yes. Okay. Not combined, perhaps, but definitely independently, you can use both to go for helpfulness, harmlessness, or you can get just better at making JSON, right? Like the domains that you can apply these techniques to, there are quite a few. Okay. So to continue to try to disentangle this web of alignment, should we expect, like everything else in the world, that this ReFT thing will be DPO-ified in the near future, and would that even make sense to do for this particular method? It can make sense in the offline version. I don't know that the online version is going to make a ton of sense. I'm sure you can wrangle the math to get there; I don't see the clearest path forward, but I'm sure someone clever will be able to come up with that, though I don't know if that's going to be something that happens soon or quickly. Definitely the offline version, though, because it is so much, I mean, it is exactly RLHF, we should be able to port it to DPO with little issue. Yeah, I mean, the reward function versus loss function thing, it seems like it's amenable. You know? Okay, okay. All right, cool. I hope that was useful to everybody out there. It's certainly useful to me to keep plugging away at some of these questions as I try to get the big picture. Thanks, Wiz. We'll see you back in just a little bit to do the full demo. So in the paper, we were told in terms of today's build that all the code was provided. This is from the abstract of the paper, and they mentioned that the code is publicly available.
And they did this not just on the Grade School Math 8K dataset, but also on MathQA and an additional math dataset, and they saw consistent performance improvements. What we're going to do today, and what we need to do because we don't have that PEFT LoRA approach built into the code that was publicly available, is pick up a model that's actually an SLM off the shelf. Just a little baby 125-million-parameter model. This is called Galactica, from Facebook, and it was trained on a large-scale scientific corpus. And again, the reason we're picking this up instead of a Mistral 7B is because without PEFT QLoRA, without the quantization, without the parameter efficiency, it would just be crazy to go through this entire process with a full fine-tuning of a large language model. So we're going to use this model, Galactica 125 million. We're also going to use the MathQA dataset. This is a large-scale dataset of math word problems; it's not anything very different from other math word problems you'll find out there, although we're not going to use the multiple choice answers during the SFT or ReFT parts of the process. Let's just recall what we're going to do: we're going to iterate several epochs on the training data to warm it up. Then we're going to start with that warmed-up model, and we're going to perform reinforcement learning via PPO, using not a separate reward model, but rather that reward function that helps us decide: should we reward it? Is it right? Should we not reward it? Is it completely wrong, or should we partially reward it? Does it include a number? (A minimal sketch of this reward rule in code appears just below.) And with that, I'm going to pass it over to my man Wiz to show us how to perform ReFT. Over to you, man. Yes. Thank you, Greg. Okay. So the actual code is pretty straightforward. I mean, it's given to us, so that always makes it easier. But it works, and that's great. We have to do a few fiddly things in order to make it work in a Colab environment. And I do want to be clear that I did this on an A100 instance, and we did wind up using 31 gigabytes of GPU RAM during the PPO step. So while you can fit the SFT step in about 16 gigs with the settings or hyperparameters we're using, the PPO step is actually going to be quite a bit more memory intensive. You could reduce some hyperparameters even more in order to squeeze that out, but it will take proportionally longer depending on which chip you're using. So just to get that out of the way straight away, I did use an A100 instance, which requires Colab Pro. After that, it's pretty straightforward. We just have to get the code from the repo. So the repo is here. Basically, it shows us exactly how we would run these, and it works. It does the thing. It doesn't have specific instructions for the small model, but it's the same; we just change it to the small script versus the regular script. We're going to cd into that directory so that we can run all our scripts with the right paths, and then we're going to install some requirements. We're going to install some additional requirements that don't get picked up properly. So we'll need DeepSpeed. This is the distributed framework they're using. The examples are meant to be run on multiple GPUs; we're changing some things so that they don't do that.
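Before going further into the setup, here is a minimal sketch of the reward rule Greg just described: 1.0 for the correct numeric answer, 0.1 if the generation contains any number at all, 0.0 otherwise. This is our own illustration rather than the repo's exact extraction logic, and the "take the last number in the generation" heuristic is an assumption.

```python
import re

def reft_style_reward(generated_text: str, target_answer: float) -> float:
    """Rule-based reward: 1.0 for a correct numeric answer, 0.1 for any number, else 0.0."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", generated_text)
    if not numbers:
        return 0.0                      # no numeric value anywhere, so no reward
    extracted = float(numbers[-1])      # assume the final number is the model's answer
    return 1.0 if extracted == target_answer else 0.1
```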
That said, if you wanted to run this, say, on a local cluster that you had, or on whatever hardware you have available to you, it is using DeepSpeed, so it'll adapt to that well. We're going to also pick up a few additional dependencies that don't get picked up properly by the requirements, and then we do need the specific version of TensorFlow GPU. And so we get it. We're going to use wandb in order to do our experiment tracking, so we'll get the wandb API key so that we can look at how our training went. And then we're going to use Hugging Face, so we'll also do our notebook login, which is going to log us in to Hugging Face, make sure that we have access to the right repositories, and let us push anything that we want to push. So there's two steps, really, right? There's this SFT warmup step, and then there's a PPO reinforcement learning step. The first of those is pretty straightforward. So the SFT warmup is exactly what you'd expect. It is, you know, TRL behind the scenes, the library that we've used to do this before (a minimal TRL-style sketch appears below). And all we have to do is make some modifications to the script file in order to run it in a Colab instance. What we're going to do is change some of these hyperparameters so that they better fit our hardware. We're going to go ahead and do just one epoch; the paper suggested about four epochs, and you're likely going to see better results if you do that, so keep that in mind. We just use one epoch to keep it light today, but you'd want to use three to four if you're following the paper exactly. We do have to change the model path; there have been some updates since the code was released. So we're using Facebook's Galactica 125M. That's this model Greg was talking about: basically just a 125-million-parameter language model that's pretty good at the task it's meant for, which is kind of these science-related tasks and language generation. We're going to also look at basically only doing one process. So we're not going to be using the default of eight, since we're just working with our Colab instance. And everything else we can kind of leave the same. So the way you would do this, to be clear, is you would head into the directory that holds your cloned repo. We're going to go to our examples directory, our small models examples, and then we're going to be using MathQA today. So we'll double click on the script, and then we just want to make sure that this file lines up with what we have here. You can just straight copy and paste this into here and it'll work great. So once we have that set, we're going to do one more change, and that's to make sure DeepSpeed is aware that we only want to use one process. And so we're going to come into the DeepSpeed YAML and configure it so it only uses one process. Again, this is just due to the fact we're running it in Colab. If you're not in Colab, you don't have to worry about these things; just set this to the number of processes that makes sense based on the number of GPUs you have in your hardware setup. After that, we just call the training script. And thanks to wandb, right, we have the ability to see how training went. So we have everything from our loss curve, which looks pretty nice, to our learning rate as it progresses over the training. And then we also have our value accuracy. So we see this at about 14%, right?
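As a rough picture of what that warmup stage is doing, here is a TRL-style sketch. It stands in for the repo's own DeepSpeed script rather than reproducing it, the hyperparameters are illustrative, the "math_qa" Hub identifier and its column names are assumptions based on the public dataset card, and argument names like dataset_text_field have moved around across TRL versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Base model from the demo and a MathQA-style training set.
model_name = "facebook/galactica-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("math_qa", split="train", trust_remote_code=True)
# Collapse each record into one chain-of-thought string for causal LM training.
dataset = dataset.map(lambda r: {"text": r["Problem"] + "\n" + r["Rationale"]})

args = TrainingArguments(
    output_dir="outputs/mathqa-sft-warmup",
    num_train_epochs=1,                 # the paper warms up for roughly 3-4 epochs; 1 keeps it light
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    logging_steps=50,
    report_to="wandb",                  # mirrors the wandb tracking used in the demo
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",
)
trainer.train()
```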
That value accuracy is meant to be tracked over about four epochs, so we should see it trend up over time. Obviously with one epoch, we only have a single point, so this is the way that we're going to see it. But this is the advantage of using wandb, and you can see it kind of does what it's supposed to, right? Trains the model. We love to see it train the model. The next stage is to do the PPO step. So this is the proximal policy optimization step. It's the cool part, right? We have to make a few changes. So number one, because we only did one epoch, we have to change the actual checkpoint that we're pointing to. So if you follow this notebook exactly, this checkpoint will be correct. But if you've made some adjustments, or say you've done it for four epochs, you'll have to make sure that you highlight the correct directory. Now, that can be found through your repo. We're going to go to basically a directory here, and that directory is going to have our outputs, and we have our MathQA, and then we'll select the directory associated with the training checkpoint that we want. You can see here that in the notebook we're using epoch one, but if you did four epochs, right, you can assume that we'll be using a checkpoint that's associated with the fourth epoch. Or if you had some kind of crazy loss explosion, you might want to select a different checkpoint depending on how training went. Once we have that, again, the only thing we're changing is this num processes, and we're also changing our batch sizes. So I found that with a batch size of 8, we used that 31-point-something gigabytes of GPU RAM. So if you're on a T4 instance, right, you might want to even halve this so that you can fit within the 16 gigs, but for the A100 we can use eight safely and not run into any out-of-memory issues, which is what we want, right? You'll also see that we have kind of our normal stuff, right? So we have our KL coefficient. This is that idea that Greg was talking about, where we're clipping our models together so we don't have too much drift from the base model. We don't want to lose the ability to do our task; we just want to get better at it, right? As well, we have this VF coefficient. This is useful in the specific loss that the ReFT process uses. So because we're using a custom loss function, we have this coefficient that we can manipulate to determine how strongly we're training the model. (A short sketch of these settings in code follows below.) And other than that, you're good to go. We just let it rip, and you can see that we get this final result. And we can see that these numbers are okay. Right off the hop we notice the accuracy is better; I mean, that is ideal, right? Even though this is very much a kind of toy example, where we're just doing one epoch and then we're doing one epoch of one PPO epoch in the RL step, we still do see that that number goes up. However, when we look at the training, we see a slightly less clean story than we did with our original training. So you can see that while we did have kind of an okay-ish loss at the start, we do wind up just kind of blowing up and bouncing all around. This is to be expected, right? We only did one epoch of the SFT warmup, and then we only did one epoch of one PPO epoch for the actual RL step, so this is kind of what is to be expected. We do see that the model does learn at the start for the VF loss, but then we kind of blow up.
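To tie those knobs (batch size, KL coefficient, VF coefficient) to code, here is a sketch using TRL's classic PPO API. The repo uses its own DeepSpeed-based trainer, so treat this as illustrative only: the hyperparameter values are placeholders, the exact class and argument names have shifted across TRL releases, and it reuses the tokenizer and reward function from the earlier sketches, with a dataloader assumed to yield tokenized questions plus their gold numeric answers.

```python
import torch
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(
    model_name="outputs/mathqa-sft-warmup",  # the stage-one SFT warmup checkpoint
    learning_rate=1e-6,
    batch_size=8,               # reduced from the default to fit a single A100
    mini_batch_size=2,
    init_kl_coef=0.05,          # KL penalty keeping the policy near the reference model
    vf_coef=1.0,                # weight on the value-function term of the loss
)

policy = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
reference = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ppo_trainer = PPOTrainer(config, policy, reference, tokenizer)

for batch in dataloader:
    queries = [q for q in batch["input_ids"]]          # list of 1-D query tensors
    responses = [ppo_trainer.generate(q, return_prompt=False, max_new_tokens=256)
                 for q in queries]
    rewards = [torch.tensor(reft_style_reward(tokenizer.decode(r.squeeze()), gold))
               for r, gold in zip(responses, batch["answer"])]
    ppo_trainer.step(queries, [r.squeeze() for r in responses], rewards)
```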
So keep in mind that that blow-up is to be expected, given the small training run that we did. But still, we do see that despite that, we have better accuracy on the evaluation set, right? So a 10% total increase. So not bad, right? So let's look a little bit into the code here and see what's really happening behind the scenes. There's a couple of things I want to highlight from the code. There's quite a lot of code, just to be clear, so we're not going to focus on it too much. But you can see that this idea of correctness comes into play, right? So this is what Greg was talking about; we were talking about how we determine our reward, right? Instead of having a reward model, we're just using this reward function, which is going to give us points based on whether the answer is correct, whether it is a little bit correct, or whether it is not correct. The idea is, if we have this correct value, right, all this conditional is doing is saying, hey, is our extracted answer the same as our target value? And if it is, we're going to reward that bad boy with a one, right? And then, if it's not, but we have an extracted answer, we're going to reward it a little bit, right? So in the case of our model, the way to think about that is: if we generate the correct number, we're going to get one point, right? And this is to be used in our PPO steps. This is the reward we're talking about, right? So we're going to get one point if our answer is correct, and then we're going to get 0.1 points if we have a value, but it's not correct, right? So it's like, okay, getting values is where we want to move to. And then if we don't have a value, we're not going to reward it at all. And I want to show you guys something that becomes apparent in the actual training process. Okay, so at the beginning of our training process, if we scroll all the way to the top here, you're going to notice that there's a lot of errors and a lot of issues with extracting an actual value, right? So you're going to see that for the first, like, 100-ish steps, we actually have a difficult time extracting an answer at all. And that remains the case kind of throughout, right? Even into the 150th step, we're potentially running into issues where we're not able to actually get a value, right? But as we train, you'll see that this is going to occur less and less and less. And the idea here is that we are still rewarding the model for getting a value that works, and so our model becomes better over time at providing us with values, and so we get fewer errors. And as you can see, as we get towards the end, we do very well at this task. And so that's the kind of power of this process. Even though we're not necessarily getting super close to the correct answers, we are still getting better at giving answers, which makes it easier for us to get to the correct answers. And that's thanks, in part, to what we were just looking at here: this idea of the reward function. We're also going to look at one more piece, which is this idea that we are using that custom loss. And so when we look at where we actually compute the loss, we'll see that it's a custom implementation. Basically, all we care about is that we're going to have this idea of thinking about our loss a little bit differently, right?
So instead of having that kind of specific PPO loss, we want to take into account the fact that we've explored multiple paths. We have multiple different reward signals. And so we want to compile those and allow our training to benefit from that diverse idea of correctness. And that diverse idea of correctness is exactly what lets us get more of that generalizability that we aren't getting from RLHF. So instead of just considering whether this is correct or incorrect or whatever it happens to be, we have the ability to say, this is a correct-flavor answer, but this is also a correct-flavor answer, right? Even though the actual generations might be distinct. And that lets us adapt better to these tasks. And so that's kind of one of the ideas of this ReFT process and what it's meant to do. And so that's it. You can see here, we just combine these losses, and we're going to scale the second term of our loss by that coefficient that we set in the notebook. So this is that coefficient here. And so this is the idea, right? That's all we have to do. It's basically RLHF with extra steps, but they're good extra steps, and they really help us to get our model close to where we want it to be while still being able to adapt to all of the potential tasks that could be in that space. So we're not just getting much better at one specific task, we're getting better at a general task set, in this case, math. And we can see that even though our numbers are kind of wild and wacky, we do get to that 23.03 versus the 13 on the base model. So good job, ReFT. With that, I'll pass you guys back to Greg, who will close us out and take us to some Q&A. Yeah, awesome. Thanks, Chris. Awesome to see that wandb flexing on the good old-fashioned tracking of the weights and biases, the parameters actually in the model. So a little old-school throwback there to bust that one out. Really, really enjoyed that. And, you know, at the end of the day, we learned that phase one and phase two, this SFT-then-RL thing, is the sort of pattern that we can match across RLHF, RLAIF, and ReFT, and it's the exact same PPO scheme. We're also taking this idea from the DPO scheme of not leveraging a reward model, but instead leveraging a simple function, and we're using that in the ReFT process. The reward function, as Chris just walked us through, has very simple rules. Correct, we get that one. If we're on the path, a little bit correct, we get that 0.1. And as we come to the end of our alignment series, the future of alignment is very bright. As we've talked about, there are many ways to align towards different tasks, and we're excited to see what future techniques, and what future things people will be aligning to, are going to come out next in 2024. So if you have questions, I want to go ahead and throw the QR code up on the screen here. Please do upvote them and add them to Slido. Or, if you'd like, go ahead and throw them into the YouTube live chat. So I want to bring Wiz back up here, and I want to start off the discussion today by doing a little comparison. This was work done at OpenAI that they released May 31st, 2023, and they called this process supervision. This is where they went and, quote, rewarded each correct step of reasoning, not just the final answer.
And this sort of was their idea on how to directly train the model to produce an entire chain of thought that was correct, rather than just an answer that was correct. And the title of this blog was Improving Mathematical Reasoning with Process Supervision. It seems like everybody's been paying attention to math problems. Everybody's been trying to improve LLM's ability to do math. How would you compare this process supervision technique to the REFT one that we just used exactly? How should we sort of situate that? Is this also sort of an alignment technique happening here? Yeah, definitely. I mean, it's part of it's kind of part of that ecosystem, right? I wouldn't say that they're exactly the same or they're trying to achieve exactly the same goals, but they definitely do have a lot of overlap. And there's an argument to be made in either case for which is the kind of of correct you know uh way to do this i mean as you said kind of math math problems right uh doing more and more complex math is is a goal that we have in our uh you know that we're working towards in a number of different ways. I think where REFT comes in is this idea that we're making some key assumptions that I think pan out very well, which is that we, you know, we can't, if we're getting the right answer, then the chain of thought steps we're taking are likely correct, right? Or are in some way more likely to be correct than if we don't get the right answer. And so I think that assumption is very powerful and likely true because it's unlikely the model will get to the correct answer while meandering off a weird path, right? Though certainly we could overfit it to do that. Whereas this process by open AI is more about making sure that we have the ability to reason through problems. And so when it comes to that generalizability trade-off, this approach is making very little concessions when it comes to generalizability, because we're actually learning how to think through problems as opposed to learning how to get to the correct answer. And so I do think that this, what we have on the screen now is ultimately more flexible, but that might make it a little bit less powerful, especially on the benchmarks that we're looking at today. Yeah. Yeah. Okay. All right. And I noticed, so yeah, that was just sort of a thought in my mind that we'd seen this before. Everybody's focused on math and it's not exactly clear how to do this right. I do like the sort of generalizability comment on thinking about process versus outcome supervision here and definitely an interesting space to continue watching. I wanna move to sort of double click in on this sort of rewarding of the partials that we're seeing in REFT. So I know Forenz asked in the chat and we have a question in Slido on sort of like this idea of exact match. And when we're talking about this on a token by token basis, it's like, we're looking for a number? Is that what we're doing? And what if it's a big number? How many tokens is that, that we're actually looking for in a sequence or it's it's just like if it's completely the right answer in the exact sequence you get a one if you have any number whatsoever in any token string you get a 0.1 is that really all it comes down to and is there a way to sort of expand on this with this idea of semantic similarity that we've been talking about in other events versus exact match? 
So, yeah, I mean, basically, so it is, we have to be careful of the language here because we don't, behind the scenes in the reward function, we don't care what tokens at all. We convert it to a Python float, and then we see if that float is the same as another float. So the actual token doesn't matter there. We do use our log probes to update, so it will be backported to tokens. But the idea is when we're doing this exact match is literally just seeing if the two floats are the same, right? The idea of having a different, you know, number being marked as partially correct is as I was going through the demo, right, showing that at the beginning, we barely even get comprehensible answers, right? We don't get values even. And so we want to reward the model because we want to continue to nudge it to like, hey, by the way, you should be producing a number at the end of this. Like, I should be able to get a value from you. And if I don't get a value from you, you're doing it wrong. And so we want to nudge it in that direction, right? So that over time, we're always producing our expected outcome, which in this case is values. But we don't want to reward it significantly, because we don't want it to be happy that it's just generating values. And so this is the idea that the correct value is, you know, significantly more rewarded than the incorrect value, even though we still want to nudge it that direction. The nudging. So the nudging sort of builds on this warmup idea that we had. It's sort of like we're, we're warmed up, we're moving down the path and we have to keep moving down the path. It's just one or zero doesn't allow us to do that. But the little nudge-a-rooney, the point one, okay, you're doing something right. You know, you're crawling your way to an answer. If you ask a kid, right, what's two plus two and they say five, you go close, but wrong. Yeah. You know, and it's the same thing for the LLM. Whereas if you say what's two plus two and someone says like apple, definitely wrong. Exactly. and on the idea of semantic closeness um that could be very useful for other domains absolutely right uh if if what we care about is semantic relatedness then we're kind of moving back into a traditional reward model system less so in this reward function space. The issue with math, right, is that the incorrect response will be incorrect every time. And there's not really a benefit, right, to rewarding, okay, well, you're close. Well, we don't care that you're close, right? In math, it doesn't matter that you're close to the right answer. It's either correct or incorrect. In math, it doesn't matter that you're close to the right answer. It's either correct or incorrect. And so I think we're less likely to see like a plus minus 5% close to the number consideration. But for other tasks, absolutely. Or perhaps in approximations, absolutely useful as well. Yeah. And I mean, even closeness when you get into the process supervision versus outcome supervision, the closeness is like which part was close. That's right. Exactly, right? And it's very, very unclear how you would define even being close on any given large problem set that span many domains okay so uh here we are wrapping up our alignment series so we went through rlhf we went through rlaf went to dpo now reft um what are your top level takeaways and what do you see on the horizon for alignment? And is there anything that you're paying attention to that the rest of us should know about? DPO and all of its variants are so cool. They're so good. 
They're more stable. They're easier to implement. Blah, blah, blah, blah. We go on forever. And getting in the last function allows us to tinker in a way that people are very used to in ML in these domains, right? So RL folks are very used to dealing with their processes, but we're very used to the loss function. And alignment is going to become table stakes at some point right uh it makes the model better and it's relatively low cost uh i think we're gonna see more and more of these techniques used and we'll also start to get kind of more evolved data sets that help us kind of get it to a specific point that it should be uh right so it's like here's your harmfulness helpfulness uh you know toxicity bias whatever x math math yeah json right like yeah yeah yeah right specific data sets you need we're gonna see more of those proliferate as well and uh at the end of the day you know in in a year i imagine this is just a part of the process. You know, it's just how we do data preparation and we do fine tuning. Then we do evaluation. We're going to sneak in a little alignment step there because we know we can get the model close. We really want it to be exact, right? And that's what alignment helps us to do. Okay. Okay. Yeah. Love that insight. So we don't have any more alignment methods. We really want to cover on the docket, but we'll keep our eyes peeled for things beyond DPO that we think are important for you guys to pay attention to. Just one quick final question, tactical question that came in last minute. Can the reward be negative in this method, in this ref method? I don't think we want it to be. No. There's, there's a, it doesn't seem useful in this specific case to have a negative reward. I I'm sure that there are processes that you would, would want to negative reward, but in this case, the zero is zero is fine yeah yeah like like how how bad is it to get the wrong answer on a math problem right something like that yeah it also does weird things with the the rest of the calculation so we just want to leave it as some non-negative uh value between i i don't think it has to be between zero and one but zero one's very very easy to work with. Love it. Okay. All right, Wiz. Well, that's a wrap. Thanks so much for your insights today. And it's time to go ahead and wrap up everybody. So thank you so much for joining us for another Wednesday live sesh. If you enjoyed it, go ahead, like, subscribe, and ring that bell for notifications about what's going on and the latest and greatest. You can find us every Wednesday on YouTube at the same time, same place. If you like this session and you're not yet part of our community, you should join today and start building, shipping, and sharing with folks aligned along the same path as you. And we try to make it easy to do that. In fact, we recently started open sourcing our six-month-old courses. This is something that we look forward to continue rolling out. If you haven't checked out our LLM Ops, LLMs in Production Cohort 1 course on GitHub. You can start learning there directly to get up to speed on your LLM application development from prototyping to production. If you're ready to like super accelerate beyond what you can do open source from our resources, we've got more stuff coming up in Q2 that is gonna be open sourced as well. Then please do check out our AI engineering bootcamp. We've got a couple of cohorts coming up. 
You get the opportunity to work directly with the whiz, myself, AI Makerspace peer supporters, and a group of engineers and engineering leaders that really are pretty legendary. Every single cohort. Certifications open up opportunities with hiring partners and consulting work throughout our community. And then finally, if you have feedback on today's event, please let us know. We'll drop a feedback form in the chat. You'll also get a ping from Luma. And until next time, keep building, shipping, and sharing, and we'll do the same. Thanks so much, everybody. Have a great week. See you soon.
Aligning LLMs: ReFT
3,630
AI Makerspace
20240314
In this event, we’ll break down the steps of ReFT, which consists of two stages: the warm-up stage and the reinforcement learning state. We’ll also discuss how the authors were able to achieve significantly increased performance on classic benchmarks like Grade School Math 8k (GSM8K), MathQA, and Simple Variations on Arithmetic Math word Problems (SVAMP) datasets. ​As always, we’ll demonstrate how to code the core aspects of ReFT yourself in a Google Colab notebook environment, and all code will be provided directly to attendees! Event page: https://lu.ma/llmsreft Have a question for a speaker? Drop them here: https://app.sli.do/event/1mMvrSnsuDv9iEvRirykwx Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/QqHnx1EovmPUfcnc8
2024-06-10T02:20:56.509046
https://www.youtube.com/live/Jp-6hyf_CoE
Hey, Wiz. So supervised fine tuning and prompt engineering are kind of on like a spectrum, right? That's right, Greg. Yes. Yeah. Yeah. And then like instruction tuning, that's also a type of fine tuning. Is that right? That's correct. And even like the chat style models that we see out there, these chat style LLMs, those are sort of another form of instruction fine tuning written with dialogue chat style instructions. Is that right? All true, Greg. All right. And then alignment techniques like RLHF, that's even kind of fine tuning? Yeah, mostly. Okay. And DPO, is that fine tuning too? Definitely fine tuning. Definitely fine tuning. Wow. So, okay. We got a lot to cover today, man. I think we're going to sort of do the fine tuning 101 today and tell the guys, tell the girls out there what they need to know to build with fine tuning without falling down all of these crazy rabbit holes that the entire space is riddled with. What do you say, man? Sounds awesome to me, Greg. Okay. We're going to have you back for a couple of discussions throughout today. So we're going to mix it up. And so we'll see you back pretty soon to sort of discuss some of these subtle nuances as we get into practical fine tuning. Thanks, Wiz. Can't wait. All right. Welcome, everybody. We are talking practical fine tuning today. And this is a big topic. By the end of the day today, you're going to sort of walk away with exactly how you can think about specific ways to build useful LLM applications with fine tuning. We're going to give you just enough theory to not get too confused with the space. And we're going to give you sort of that 101 level intro that is going to allow you to sort of decide what to do, decide how to do it, decide how to balance performance and cost. And we're going to mix it up a little bit with discussions throughout. So if you have questions live, please throw them in the Slido link or in the YouTube chat. We're going to have a number of opportunities for you to ask questions and to triple down on questions that I ask and I bring Wiz back on stage for. So let's get into it today, everybody. We're talking practical fine tuning. And the big goal of the day as we align our aim towards really understanding this big concept is to outline the most relevant 101 level theory concepts behind the common fine tuning techniques we just mentioned a lot in the intro we're not going to cover all of those because it wouldn't be as useful as we think it should be in a 101 level fine-tuning event we're going to talk about ways that you can think about fine-tuning and the core applications that we can use fine-tuning for and of course we'll show you how to do some fine-tuning with one of those examples so first we're going to talk fine-tuning in theory we're going to talk fine-tuning in practice we're going to talk about performance in theory. We're going to talk fine tuning in practice. We're going to talk about performance versus cost. At the end of the day, we want these things to be implemented in LLM applications that create real value for our companies and for our customers. So what's the relevant theory that we need to baseline our discussions with today? The first thing that I'll put out there is that when we think about fine-tuning, we don't want to think about it as a thousand percent equal, triple equal sign to supervised fine-tuning, but it is approximately more or less equal to fine-tuning, especially at the 101 consideration level. Why is this a useful mental model? 
Because we can sort of harken back to the paradigms of ML that we learned originally when we started studying the topic. We learned about supervised learning, unsupervised learning, and reinforcement learning. We've mentioned this in a recent event on DPO. The paradigms of ML can be thought of a number of different ways, and some of those ways can be kind of useful when we think about fine-tuning. For instance, supervised learning is the sort of data with labels paradigm, whereas unsupervised is the data without labels paradigm. Further, you can think of some of the ways that these paradigms overlap. Of course, there's the classification regression, the absolute classics of supervised learning. And it's worth noting here that we can use LLMs to do simple classification and regression problems. We can use very powerful LLM models to solve kind of simpler machine learning problems. And that's the first hint or key to thinking about fine tuning is that it's a natural progression from sort of the classical ml paradigm and of course if we think about neural networks we can use those for all sorts of things across paradigms which is exactly kind of where our story starts with the generative pre-trained transformer and the models, the GPT style models that have become LLMs today. This is a common representation of the way we can kind of think about what's going on when you train an LLM from scratch. And you'll notice that we're in the unsupervised paradigm to sort of leverage a lot of this data that we scrape from the internet or other sources that form the basis of the general ability model to do next word prediction. And then we can do supervised fine-tuning techniques, or also known as alignment techniques, like making it better at following instructions or making it better at specific tasks. We might need to change the structure of our model in some cases if we want it to perform a more classic task that's not next-word prediction. So in general, when we talk about supervised fine-tuning in this context, we're talking about sort of aligning it with the way we expect our models to follow instructions, or supervised fine-tuning with instructions, aka instruction tuning. And of course, RLHF, DPO, the alignment techniques sort of at the end are a continued method of fine tuning. Now, there are some details that matter here, and we're going to just talk about high level, a couple of them. So let's go back to consider how these GPTs originally sort of conceived of at OpenAI and other places, but certainly popularized by OpenAI. They started with this idea of the transformer and the GPT style transformer was a decoder only transformer. And this is sort of the type of architecture we see more and more today. With the first GPT model, it was really kind of using this decoder only stack and noticing that we can do a lot with this unsupervised pre-training. Now, to be fair, unsupervised pre-training is not wholly unsupervised. It's more of a semi-supervised approach. So we start to get into ambiguity here and we do not want to fall down this rabbit hole. So we're going to kind of leave that there. But we can kind of see that we have this supervised and unsupervised approach that we're leveraging for the original GPT. This combined with the transformer architecture, the idea of these attention layers. These are things that we've talked about in previous events that if you want more detail on, definitely check out our transformer playlist on YouTube. 
And this idea of using pre-trained models and then fine-tuning them was also a core idea to the GPT stack. And what we saw with the GPT model was that we could actually improve task-specific performance as we improved the underlying model. So this sort of pre-training, this unsupervised pre-training step kind of provided steady improvements with more continued pre-training. And it changed what was just fine-tuning that produced mediocre results in many cases to unsupervised pre-training plus fine-tuning results that immediately jumped up with very few fine-tuning steps to high- performance levels across things like sentiment analysis and specific nlp tasks gpt said you know what we want to work on actually improving fine tuning in addition to some other things which led to gpt2 andT-2 was like, actually, we can do some pretty cool stuff even without explicit supervision. We can do stuff zero shot. This was the introduction of the idea zero shot task transfer. And so the real interesting thing here in GPT-2 was, as they scaled up the approach, they said, well, actually, we can get really good results without any fine tuning, simply by prompting in the context window with a specific instruction, perhaps an example, which leads into GPT-3, which said, well, actually we can keep scaling this thing up and we can greatly improve performance without explicit supervision, using not just zero shot instruction, but one shot, two shot, few shot learning. And this was the moment where they were like, man, this task agnostic architecture, the general LLM combined with task-specific data for task-specific fine-tuning, maybe we can bust out of this paradigm. Maybe one day LLMs can actually just be given an instruction, just be given one or two examples, maybe even no examples, zero shot, just like humans can, and we can ask them to do things. Because humans don't require large supervised data sets and many hundreds or many thousands of examples to learn new tasks, why should LLMs? And it turns out that GPT-3 was even better without fine-tuning than GPT-2. And we see that continued today. The point we want to extract from this, and it's relevant to our discussion today, is that prompting and fine-tuning lie on a spectrum. A zero-shot prompt is simply an instruction. Translate English to French, for instance. Here's an input, give me the output. Cheese to fromage. You can also provide one example or many examples. This is sort of the idea of one shot, two shot, few shot learning. Once you have so many examples within a context window that you start running out of context window space, or it might actually just be more cost efficient, which we'll pick up that thread in just a moment, we might want to consider fine tuning versus simple prompting. Because as we go from zero shot, one shot, few shot, we get into this fine tuning space as we add more and more examples. Sometimes because we have to. Other times because it might make sense from a cost versus performance perspective. So if we can do it with zero shot, let's go ahead and do a zero shot. But if we need a lot of examples, that's when we might want to start consider fine tuning. And so this sort of gave birth to this idea of prompt engineering. It got very hot at this time, you know, as we got into this sort of language models or few-shot learners in 2020 paper. And so everybody started really focusing on providing clear and specific instructions. 
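To make that spectrum concrete, here is what the two ends look like side by side, written as plain prompt strings in the style of the GPT-3 paper's translation figure rather than output from any particular model.

```python
# Zero-shot: just the instruction and the new input.
zero_shot_prompt = """Translate English to French.
cheese =>"""

# Few-shot: the same instruction plus a handful of worked examples in the context window.
few_shot_prompt = """Translate English to French.
sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""
```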
Context is something that has continued to be very, very important in the continued evolution of the space, but it's something we'll leave for discussion another day. And for the input, you know, we see a lot of one shot, two shot, few shot. We see a lot of thinking through of the examples, the sort of chain of thought reasoning. And then this idea of constraining the output format is also very, very important for some of the more complex applications that we'll build today. And when we talk about input and output in prompting, this is going to give us a key insight into how we can think about fine-tuning as well, because controlling the input and output is really what fine-tuning is all about. In GPT-3's paper, they said, well, you know, recently we saw that pre-training followed by fine-tuning crushes it. But it still requires fine-tuning, and the whole idea was, can we do it without fine-tuning? So this sort of space that it left for us as builders a few years later is, well, when should we use fine-tuning, and when should we use prompting, exactly? When should I decide on one versus another? Rooting this back in the initial paradigms of ML, one way that you can think about this is: if you can perform the task with classic NLP, you can probably also perform the task by fine-tuning an LLM. Now, do you really need a massively powerful model to do a really simple task? Likely not. In that case, you're probably over-optimizing for performance and under-optimizing for cost, because everything is a trade-off, right, when we make decisions about how we want to build applications. Here's an article I found on LinkedIn that was written just a few weeks ago. It's called the AI Cost-Benefit Analysis, and it looks at prompt engineering, fine-tuning, and pre-training. We're not going to really discuss pre-training; we kind of touched on it a little bit. I'm not exactly sure what we're supposed to get from this chart, but you can see that cost increases, and you can also see that performance increases, as we go from zero shot to few shot to fine-tuning to pre-training. We see this steady increase of both. So how should we think about this? Well, we can look at some numbers. Here's another recent article you might find if you do some Googling, where this is looking at the cost of fine-tuning an open source model like Llama 2. The total cost. And you're like, wow, those are big numbers. And we can see people draw conclusions like: the cost increases by orders of magnitude once you want to fine-tune the open source models. Is this precise enough, as builders, for us to be making decisions based on? I mean, all of these are massive numbers. Do we really need to spend that much money? This isn't the only way to fine-tune, of course. It's not even the only way to fine-tune open source models, because the space continues to evolve. If we think about the cost of fine-tuning, let's say on GPT-3.5 Turbo from OpenAI, we get this sort of per-token cost structure for training, aka fine-tuning, for input and for output. We are also seeing the emergence of tools like Gradient. This is from Gradient's website today. Gradient allows you to fine-tune and to host open source models through their API, similar to the way you would do it through OpenAI or Claude or any other closed source API, and we again see this per-token pricing. Very interesting. So there's a lot of stuff going on here that is kind of hard to square exactly. When do I need fine-tuning?
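One way to start squaring those numbers is a quick back-of-the-envelope comparison of the two costs in play: paying for few-shot context on every call versus paying once to fine-tune and then sending shorter prompts. Every price and volume below is a placeholder for illustration, not an actual rate from OpenAI, Gradient, or anyone else.

```python
# All figures are made-up placeholders; swap in real rates and traffic to use this.
PROMPT_PRICE_PER_1K = 0.0005      # $ per 1K input tokens
TRAIN_PRICE_PER_1K = 0.008        # $ per 1K training tokens

few_shot_overhead_tokens = 1_200  # instructions + examples repeated on every call
calls_per_month = 500_000
training_tokens = 5_000_000       # e.g. ~10K examples of ~500 tokens over a few epochs

monthly_prompt_overhead = calls_per_month * few_shot_overhead_tokens / 1000 * PROMPT_PRICE_PER_1K
one_time_finetune_cost = training_tokens / 1000 * TRAIN_PRICE_PER_1K

print(f"Few-shot context overhead per month: ${monthly_prompt_overhead:,.0f}")
print(f"One-time fine-tuning cost:           ${one_time_finetune_cost:,.0f}")
```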
Well, before we open this up for discussion with Wiz, let's just think about how, when we move from prompt engineering to fine-tuning, we're kind of optimizing the LLM's behavior against a specific task. That's what we want it to do. We want to constrain its behavior from the general everything to something more task-specific. And there are a number of different ways to think about this idea of task, but structurally, what we're doing when we move from prompting to fine-tuning is moving examples, one shot, two shot, few shot, into the actual training of the model. So we're rewiring the neurons and the weights. We're recalculating the weights, the parameters within the model, so that inference costs the same amount, but I don't have to put as many tokens in on the front end. And this has implications for how we would make a decision based on cost. So we're kind of focused in on how the model should act, how the model should behave. And with this, I want to sort of bring Wiz back up onto the stage to have our first little discussion here, and if you guys have questions, go ahead and shout them out in the chat. But I want to ask: there's all this stuff out there, there's all these articles on cost, and I'm not exactly sure what to make of them. When you think about the trade-offs between using simple prompting and fine-tuning, how are you thinking about this, Wiz? I mean, one of the first things I think about is this idea that, you know, there's the per-unit call cost, or however you want to define it, and then there's the actual cost of fine-tuning, right? So prompt engineering is awesome. It is something that we can include with every prompt. It's very flexible, but it does mean we're adding some fixed amount of context to our context window per call, which results in greater costs, both if you're hosting the inference on your own, or if you're using the APIs like OpenAI, etc. So there's this idea that there are these two kinds of costs that we want to keep in mind when we're thinking about our costs for fine-tuning versus prompt engineering. Okay. And then this kind of cost-of-7B-versus-13B-versus-65B article that is sort of top ranked on Google recently, I'm just doing a little bit of Googling here. What are the costs of fine-tuning? Is that the right way to think about it? I mean, aren't there some techniques out there that people have heard of, like PEFT and LoRA, that can maybe help reduce this cost a little bit? Like, what's going on there? I mean, should I just sort of trust these big numbers, at $1,300 to fine-tune a 7B? Does that sound right to you? It depends, and it varies so much that I don't think there's going to be a single source or single resource that you can look to that will really actually tell you, generally, the cost. You can use it to understand, say, relative cost, right? So bigger models will cost more. How much more is going to depend very heavily on the kinds of fine-tuning that you use. And keep in mind, right, PEFT, parameter efficient fine-tuning, is all about methods that allow us to fine-tune models without worrying about as many parameters (a minimal LoRA configuration sketch follows below).
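Since PEFT just came up, here is a minimal sketch of what a LoRA setup looks like with the Hugging Face peft library. The base model, rank, and target module names are illustrative placeholders, not a recommended recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # typically well under 1% of total parameters
```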
So when we scale into these kind of very large language models, right, sort of 70 to 100-plus billion parameter models, I think it's important to realize that we do have ways to, quote unquote, cheat out additional cost savings by using parameter efficient fine-tuning methods. So it's hard to find a single source of truth in terms of cost, though you can generally still assume a big model will cost more to fine-tune than a small model, just due to the fact that, even if we use parameter efficient fine-tuning, we're probably going to be training more parameters with a larger model. So this idea of training all the parameters versus training just some of the parameters, this is a key distinction too, for people to keep in mind, isn't it? Absolutely. Yes. Without a shadow of a doubt. It is so important to understand that none of this training is going to happen in a vacuum. And because it's not going to happen in a vacuum, these kind of vacuum charts aren't really going to help you decide what costs you're going to incur. That's going to come down to doing some research and some experimentation to figure out what's the best strategy. But almost never are you going to want to train all the parameters in your model these days. Yeah, okay. And like, we can't be sure, when OpenAI is fine-tuning for us, whether they're doing some sort of parameter efficient fine-tuning technique or a full fine-tuning. And we can't be sure, if it's behind an API in general, whether people are doing parameter efficient or full fine-tuning. But do you have any guidance on how to try to benchmark and baseline performance as you're doing fine-tuning, for people that are starting to dip their toes in? Like, is there an amount of money that you think you should be spending on an initial test to see if fine-tuning does the thing? You know, I know we do a lot of stuff in class and it's relatively cost effective. What insights can you provide to the audience that might help them out if they want to dabble a little bit in fine-tuning? Yeah, I mean, if you're going to be fine-tuning, like, a production use case, you're probably going to be dropping like a thousand bucks on the fine-tuning process. You know, I can see situations where it might be less than that, but ultimately you're going to hit that 1K pretty easily, especially if you're using kind of these larger models, you know, 13B plus. And that just comes down to the fact that we need GPUs to run these things, and they have to run for a while. And if we want to do a real kind of full, quote unquote, fine-tune, we need a lot of samples, right? So you might be able to get away with like 500, but you might need up to 50,000. And so to ballpark it, I'd say you're definitely going to be dropping 1K to get the ball rolling. Yeah. Got to pay to play. Okay. We got a couple of questions, and I think they're going to be better suited for the next discussion sesh, so for now let's go ahead and keep rocking it here. Thanks, Wiz, we'll have you back up in just a little bit. Shout out to Chris Walker, we are going to get back around to that one here in just a moment. And if you want to know more about PEFT and LoRA, check out our Efficient Fine-Tuning with PEFT LoRA event that we did recently. Hopefully that will help you out.
As we think about building prototypes with fine-tuning, we're generally doing this thing that we're pretty comfortable with. We're taking a pre-trained LLM off the shelf, and then we're sort of adapting it, right? We're specializing it. We're taking this general thing that can do lots of stuff, and then we're either specializing it to a specific behavior or task, or to a specific domain. Now, we've talked recently in our alignment series about how, after you do this, you might as well go ahead and align, whether it's with RLHF, RLAIF, or you might as well go ahead and just pick up DPO. It's becoming the industry standard, and alignment is sort of a simple thing to do afterwards. And when we are trying to adapt it to a particular behavior or task, we're trying to make it more helpful at that thing. When we align it, we're trying to make it a little bit more harmless at working with us to produce that thing. Both of these are useful, and there are further alignment aspects that we'll get into next week in our ReFT event. When we zoom in on this adapting to a particular behavior, task, or domain, we can break out three primary forms of fine-tuning. Now, this is not something you'll find anywhere; this is sort of our conceptualization, and we'd love as much feedback as anybody can give us on your interpretation and how useful you find this. But there's kind of the classic, the OG: training it for a particular task, right? This is not new. This goes back all the way to classic NLP and supervised learning, where we're training the behavior of the LLM response. We get some sort of specific thing that we're doing. Summarization, right? Sentiment analysis. These are some of the absolute classics. Choosing which multiple choice option comes next in the sentence, if it's not relevant to the first part, these kind of classic benchmarks or tasks. Then there's this interesting space that's opened up where we can talk about constraining the input-output schema. This is less about the behavior and more about the format of the response. So for instance, we got the question in the chat from Chris: can we increase function calling performance? When you call a function, you want the format specifically to be in JSON. We can constrain the format of the output to be something that can be used as an input to something else, right? And that's sort of the idea of function calling. There are a lot of ways we might want to constrain the input-output schema, and task training and constraining the input-output schema are not mutually exclusive; they are not independent of one another. (An example of what a training record for this can look like is sketched below.) Nor is the third piece of this, where we can sort of train new language. Now, you could attempt to train a completely new language, like teaching Chinese to an English-speaking model. You're probably not going to get nearly as good results as you would if you did some unsupervised pre-training on the native language. More potentially useful here, we're going to be using the language training piece to train the LLM's ability to interpret new words. So it's not necessarily that it was trained to understand them from scratch in an unsupervised pre-training modality, but we're training its ability to look at input-output pairs and then draw distinctions between words, and connections and patterns between new words, that are rooted in the patterns between words that it was trained on.
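To make the "constraining the input-output schema" bucket concrete, a single supervised fine-tuning record might look like the following. The field names and the JSON shape are invented for illustration, not a standard format.

```python
# One illustrative prompt/completion pair: free-form product text in, a fixed JSON shape out.
example_record = {
    "prompt": (
        "Extract the product details as JSON with keys name, price_usd, in_stock.\n\n"
        "The AcmePhone 12 is back in stock this week for $499."
    ),
    "completion": '{"name": "AcmePhone 12", "price_usd": 499, "in_stock": true}',
}
```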
So this is very useful in particular domains, and it's something we'll see today during our build. This all leads back to an idea everybody's talking about today: smaller language models. They're more efficient and more accurate for your use case, so you get both cost and performance. They're easier to disambiguate, easier to interrogate about how exactly they got an answer, and potentially easier to secure, due to their smaller size and the ability to align them with the exact data used to train them. Now, you can train a small language model from scratch using an unsupervised pre-training approach, but you can also fine-tune your way down to a smaller, more specialized version. Before we open up the next discussion, I want to look through a couple of examples for the primary forms of fine-tuning: task training, constraining the I/O schema, and language training. Let's think about this in terms of how you're going to "-ify" something. Check this out. If we wanted to JSON-ify the output format, we could tell the model in a prompt, "Hey, always output JSON." That's going to work pretty well sometimes, but it's not going to work all the time; you might need fine-tuning. You could also constrain an LLM to produce Dockerfile output formats or YAML output formats. You'll notice a lot of big R&D labs are coming out with code-specific models, and they're probably leveraging a lot of this for their internal code generation. Boilerplate code is rapidly becoming a thing of the past. So we can "-ify" in programming. We can also "-ify" in marketing: maybe we want to turn a product description or an interesting use case into a tweet, into a direct mailer, into a LinkedIn post. In this case we're constraining the I/O schema: maybe a product description comes in, and the output is a tweet or a direct mailer or a LinkedIn message. That's something you could fine-tune. Or we could "-ify" across domains: turn regular language into legalese (not sure why we'd want to do that), or turn regular language into highly technical doctor-speak or researcher-speak, with the extra-big words. This is something we can very much do today with fine-tuning. So I want to bring Wiz back up now for another discussion. It's all about performance versus cost. Here's a question we got from a learner in class recently, who works in a large enterprise. He says: if I'm already happy with the performance of my LLM application, but I just need to reduce inference latency, I just want to make this baby faster, should I consider fine-tuning? Yes. Yes. Well, how is that so clear to you? The way I'd look at it is less from the specific latency point of view and more from how we're going to utilize our hardware, right?
If we don't have as much glutting up our context window, right? If we have prompts with six to seven different examples in them, pulling those out means we can use our hardware more efficiently. We have more resources to handle multiple requests at a time, which is going to lead to a decrease in inference latency. The idea, especially if we're talking about a per-request cost, is that we're going to be more efficient with those requests if we've fine-tuned all of that extra stuff out of our prompt and baked it into the model. That's the way I would think about it. So you're saying that by moving the required prompting tokens out of the prompt, you're fundamentally going to speed up inference. Yes, you'll wind up with reduced inference latency over many calls. That's true. Okay, that's pretty cool. So this is just a common example of a question we get where it isn't necessarily clear to people what they should be doing. And I've got another example I want to talk about on the cost-versus-performance side, which is: is it better to pick up a large domain-specific model? For instance, the medical domain has models like Med-PaLM and Med-PaLM 2 from Google. Or is it better to pick up a Mistral 7B and then teach it medical knowledge through fine-tuning? Is there an answer to this? There's everyone's favorite answer, which is: it depends. Though this one's a lot easier. What it depends on is two things. First, how much are you saving by using a smaller model? These larger domain-specific models are fantastic, but they're very expensive to run, host, and interact with through APIs. Now, will these smaller models be worse than those larger models? Probably, yes. But that's the second piece of information we need to consider: how much worse? If through fine-tuning we can get to 80 or 90 percent on whatever our desired metrics are, and we're paying 80 or 90 percent less, that's a trade-off that's well worth considering in most situations. And to be clear, for Ratna, we're talking about an off-the-shelf model, so not pre-training. If this question were about pre-training, the answer would be not even close; pre-training is so expensive. But an off-the-shelf large model is potentially going to cost you more for very little improvement to your end user's experience, which is ultimately what we care about. Yeah. And it seems like the evolution of this space is such that domain-specific models are more and more coming onto the scene, and, as we're seeing with the code models in particular, in different sizes. So as we see different-sized domain-specific models, answering this question is going to become even more nuanced. Is that right? Yes. Just as we went through with BERT, if you can remember all the way back to the BERT days, right? When we went from these big general BERTs to smaller BERTs, DistilBERTs, RoBERTas, right?
Then SciBERT, then all the rest. As we go through this evolution, where we start building a hammer when we need a hammer and a screwdriver when we need a screwdriver, it's obviously going to become a little more of a discussion. But right now it's kind of like we're building the world's biggest hammer, one that's definitely going to drive the nail every time, versus a 100x smaller hammer that drives the nail 99% of the time. That's the trade-off right now. Okay. The last thing I want to talk about before we go into the build today, and if you have questions we'll have approximately 10 minutes to discuss them, is going back to the three primary forms we're offering up as ways to think about fine-tuning: task training, constraining the I/O schema, and language training. We want to remind people that it's not always just one or the other. A lot of times you can curate the data so that you're doing a specific task within a specific language domain, and even constraining the I/O schema at the same time. You could be doing all three at once; it doesn't have to be one or the other. How would you recommend people think about what they should be doing when they pick up a model and try to fine-tune it? Yeah, I think the wisdom is: get as far as you can with prompt engineering, and when that starts to fail, fine-tune. Or if your prompts are astronomical in size, right? If the context you're providing to your LLM through your prompt is three or four times what the actual user's query is, that's when I would broach fine-tuning. When it comes to the data, though, we just need to show it examples. Show it examples of those behaviors, examples of the language we care about, how that language is used, what that language means. Those are the things I would focus on. And it always comes down to using some combination of these processes. That's what we're going to see today: we're going to have fine-tuning, but we're also going to continue to use prompt engineering, because it's super useful and it lets us do really flexible, amazing things. So the "fine-tuning or prompt engineering? The answer is yes" meme is, I think, still relevant. Yeah. So we're ultimately specifying a more general-capability model; that's what we're always doing. We might not be moving towards task-specific in a hyper-literal definition of the word task, but we are moving towards something more specific and more aligned with our particular use case. Yeah, that's right. That is right. Okay. So you see, these words are slippery. Thanks, Wiz. We'll have you back in just a minute to run through the demo. This is the difficulty: the language surrounding this stuff is challenging, and getting a handle on it as the field evolves is also challenging. For today, we're going to walk you through a fine-tuning example, and we're going to use Mistral 7B Instruct v0.2, a great off-the-shelf model that's already trained to be very helpful across many tasks.
But what we're going to do is make legalese easy. We're going to take crazy legalese, turn it into plain English, and we're going to call it Legal-Easy. And we're going to have Wiz walk us through how to fine-tune Mistral 7B for legalese. Remember to throw your questions into Slido or the YouTube live chat to keep the discussion rolling. Wiz, over to you, man. Thank you, Greg. Okay, so let's talk about Legal-Easy. The idea is simple. We interact day to day with a lot of really dense legal content, especially when we're entering the fine-tuning space: we're thinking about model licenses, acceptable use policies, all that stuff. So we're always interacting with legalese, and it's very dense and hard to understand unless you have a lot of background in that domain. We're going to use fine-tuning today to produce basically a summarization tool that outputs natural language, and very terse natural language, as we'll see. It starts, as it always does, with grabbing some dependencies. We're just going to install some dependencies. Easy peasy. Away we go. The next thing we're going to do is grab some data. The data comes from the paper "Plain English Summarization of Contracts," and you can see it's in this legal summarization repo, which has a JSON file with all the information we need. The original text is what we care about: just a blob of legal text, like a terms-of-service agreement. And then we have a reference summary, which is a short natural-language summary of that legal text. That's it. For data acquisition, we're just going to clone the repo and then load that tldrlegal_v1.json file, which has the examples we're going to use today. We'll load it into a list of JSON objects, and then all we'll do is build a dataset from that list. Easy. Now we have a dataset, and that dataset is 85 objects with these original text and reference summary features. From there we can check it out. The original text has passages like, "Welcome to the Pokemon Go video game services, which are accessible via the Niantic, Inc. mobile device application (the app). To make these Pokemon Go Terms of Service the easiest," blah, blah, blah. And the summary comes out as "Hi." That's a pretty good summary; the passage really is just saying hello. What we want to do, though, is take this from a format that's a little cumbersome to work with and move it into the instruction format the Mistral model prefers; we'll see in a second that the Mistral model card expresses how it wants instructions delivered to it. Now let's look at the instruction prompt. All we're going to do is add legal-doc and end-legal-doc tokens, supply our legal doc in between them, and then end our instruction. The output will be the summary, which says "Hi" in this case, followed by our end-of-sequence token.
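A minimal sketch of the data-acquisition step just described might look like the following. The repo URL, file name, JSON structure, and exact field names are assumptions based on the dataset mentioned in the talk, so adjust them to whatever the actual file contains.

```python
# Assumed repo/field names; the talk only specifies a legal-summarization repo
# with ~85 records containing original text and reference summary fields.
import json
from datasets import Dataset

# !git clone https://github.com/lauramanor/legal_summarization

with open("legal_summarization/tldrlegal_v1.json", "r") as f:
    raw = json.load(f)

# Handle either a dict of id -> record or a plain list of records.
items = raw.values() if isinstance(raw, dict) else raw
records = [
    {"original_text": item["original_text"], "reference_summary": item["reference_summary"]}
    for item in items
]

dataset = Dataset.from_list(records)
print(dataset)  # expect roughly 85 rows with the two text columns
```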
So we'll create a formattable string that we'll use to fill in the blanks. No worries there. Then we'll create a helper function that wraps any of our rows in this prompt format and conditionally returns the response: for generation purposes, we don't want to include the natural-language summary or the stop token, so we omit those. Then we can see that it works, and it does. That's great. Before we go any further, let's look at what the base Mistral model does with this. You can see here the instruction format Mistral cares about; we can see "please convert," which is our instruction, and then the output. Very long. Not really a summary, and certainly not plain natural language. So let's go back to our notebook and see what we can get it to do. We're going to split our data into train and test, as we always do, giving us 76 rows in train and nine in test. Perfect. Now we're going to load our model. This can all be done on a T4 instance. I'm showing it on an A100 instance just because it's a little faster, but the maximum GPU RAM utilized was 15.8 gigabytes, so it will fit on a T4. You're welcome to use a V100 or A100 if you have access to those. We're going to use some technology we've talked about in other events; I'm not going to go into it now, but the basic idea is we're going to load this in 4-bit and map it to our GPU. Easy peasy. We'll set up our tokenizer so we don't get as many complaints during the rest of the process, and we'll set up our PEFT config. This is our LoRA config: how we tell our TRL trainer, which is what we're going to use to fine-tune the model, what kind of LoRA configuration to use. We're going to use 32 as our rank, 64 as our alpha, and we'll target all linear layers with no bias; the task is causal LM. Quick rule of thumb: alpha is usually just twice the rank. That's it. Then it comes to fine-tuning. We've gotten to the end of the line: we've set up our data, we've loaded our model, and now it's time to actually train. I have some text here that will help you understand some of the parameters we're using, so you can go through it in your own time. For right now, all that matters is that we set up these hyperparameters, and these settings are chosen to keep things within the memory limits of the T4 instance. We're going to train for four epochs, which is not very many. We're going to use the fused optimizer, and we're going to keep everything in FP16 so we don't have architecture issues: BF16 requires the Ampere architecture, which the T4 doesn't have, so this keeps it possible to use a T4. A bunch of these parameters just come from the QLoRA paper, and that's why we're using them. There's an optional step where you can use your Hugging Face write token to automatically push your model to the Hub as you train, which is useful for model versioning. Then we're going to go to our SFTTrainer from the TRL library.
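Pulling those pieces together, a sketch of this setup might look like the code below, continuing from the dataset loaded in the previous sketch. The prompt wording, sentinel tokens, batch size, and gradient-accumulation steps are assumptions; the rank 32, alpha 64, four epochs, fp16, and fused-AdamW settings come from the walkthrough.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 76 train / 9 test, as in the demo.
split = dataset.train_test_split(test_size=9, seed=42)
train_dataset, test_dataset = split["train"], split["test"]

def create_prompt(row, include_response=True):
    # Wrap the legal text in sentinel tokens inside Mistral's [INST] format.
    prompt = (
        "<s>[INST] Please convert the following legal content into a plain-English summary.\n"
        "[LEGAL_DOC]\n" + row["original_text"] + "\n[END_LEGAL_DOC] [/INST]"
    )
    if include_response:
        # Only include the target summary and stop token when training.
        prompt += " " + row["reference_summary"] + "</s>"
    return prompt

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

# Load the base model in 4-bit so it fits in ~16 GB of GPU RAM (T4-friendly).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # quiet the padding complaints

# LoRA: rank 32, alpha 64 (roughly 2x rank), all linear layers, no bias, causal LM.
peft_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules="all-linear",
    bias="none",
    task_type="CAUSAL_LM",
)

# fp16 rather than bf16 (T4s are pre-Ampere); fused AdamW; four epochs.
training_args = TrainingArguments(
    output_dir="legal-easy-mistral-7b",
    num_train_epochs=4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    optim="adamw_torch_fused",
    fp16=True,
    logging_steps=10,
)
```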
This is our supervised fine-tuning trainer. We're going to load it up, pass in our model and the args we just set above, point it at the training dataset, point it at the helper function that creates those prompts, and point it at our PEFT config to tell it how to build the LoRA layers. Then we add a couple of flags: we want to use packing to make training more efficient, and we don't need it to add special tokens, because we've already added them in our prompt creation. We let this thing train; it trains for four epochs, you love to see it. We save the model out and free up some memory so we can run it in a pipeline. We're going to merge the model due to the Colab version of Transformers: we can't use an AutoPeftModelForCausalLM with a Transformers pipeline object, so we merge and unload our model, load it into a pipeline, and then try it on one of the test examples. This test example is a big, long block of legal text that I don't care to read. We pass it into our pipeline to generate from our new model, and the new model says, "Don't distribute our game." Pretty good, right? That's a good summary. The reference is "You may not give copies of the game to anyone else or try to make money from anything we've made," which is, in essence, just "don't distribute our game." Nice. We'll look at another example. The next example is, again, a lot of stuff I don't care to read, with words like jurisdiction and applicable law and regulation, all very fun words but not useful words. And it generated a natural-language summary: "Don't use the SDK for illegal stuff." The ground truth is "Stay within the law and license agreement," which is basically the same thing. So this is the idea: we were able to fine-tune our model to take a complex input full of legal jargon and output a very short natural-language description that helps us get things done. And that's it. With that, we'll pass you back to Greg before we take some questions. You're muted, Greg. Awesome, thanks, Chris. And that is how you do fine-tuning. Of course, the how is often a little easier these days than figuring out the why and the what. If you want more details on quantization and QLoRA, check out our event on that, where Chris walks through in more detail some of the aspects we glossed over in today's demo. In conclusion, where we get to by the end of this is: you can think of fine-tuning as approximately equal to supervised fine-tuning. That isn't a perfect analogy that holds up at every level as we get into the nuances, but it's a useful way to think about it at a 101 level. We can think about a task-specific spectrum of prompting and fine-tuning: as we build applications, always start with prompting. And then there are three types of fine-tuning that we offer as useful ways to conceptualize what you're doing: task training, constraining the I/O schema, and language training. These are not independent and they're not mutually exclusive, but they might be useful as you think about problems you're trying to solve for things you care about, or in your company.
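A sketch of the training and inference steps described in the demo, continuing from the objects defined in the previous sketch. The maximum sequence length and generation settings are assumptions, and SFTTrainer keyword names can differ slightly across TRL versions.

```python
import torch
from trl import SFTTrainer
from peft import AutoPeftModelForCausalLM
from transformers import pipeline

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    formatting_func=create_prompt,   # the prompt already contains <s> ... </s>
    peft_config=peft_config,
    max_seq_length=2048,             # assumption
    packing=True,                    # pack short examples together for efficiency
)
trainer.train()
trainer.save_model("legal-easy-mistral-7b")

# Merge the LoRA adapters into the base weights so a plain pipeline can run it.
merged_model = AutoPeftModelForCausalLM.from_pretrained(
    "legal-easy-mistral-7b", device_map="auto", torch_dtype=torch.float16
)
merged_model = merged_model.merge_and_unload()

summarizer = pipeline(
    "text-generation", model=merged_model, tokenizer=tokenizer, max_new_tokens=128
)
test_prompt = create_prompt(test_dataset[0], include_response=False)
print(summarizer(test_prompt)[0]["generated_text"])
```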
And with that, we're going to get into our final Q&A portion today. Wiz, welcome back up to the stage. We've got some nice questions coming in. I'm going to rapid-fire a couple from the bottom of the list related to the actual code. What is adamw_torch_fused, and is target_modules="all-linear" new? And you're muted now. I'm muted, sorry, it does this thing. Okay, so adamw_torch_fused is just AdamW, but fused. All that means is it uses specific CUDA operations to perform the update more efficiently: instead of doing a loop, it processes the optimizer updates in a more efficient way. Okay. And is target_modules="all-linear" new? I believe it is, yes. Okay. Astute question. And then a big question at the top here: how much data, time, and money do you think it would take to set up a domain-specific model? How complex do you think this is? If you're talking about pre-training: a lot of all of those things. You want up into the trillions of tokens. You probably want a 7B model, maybe you go down to 1B because you want to be cool like Phi, but you're still going to spend a lot of time, data, and money. The complexity is maybe relatively low compared to what it used to be, thanks to a lot of really fantastic innovations and tooling available today. But the rest of it, data, time, and money: high, to set up a domain-specific model. Yeah, I'm reminded of the event we did on Smart RAG with the guys at RC, who said they could fine-tune a larger system end to end using about a million documents, something like that. So lots and lots and lots, I think, is the answer. And that's why, for your particular domains, as you see domain-specific models come out, pay attention and be one of the first to pick up and leverage them. I think that will prove very useful for the companies that do it early. And then: how was the legal text formed from this PDF, exactly? I don't quite understand the question. We used a JSON blob as our dataset today, which basically just contains fields of text, and that text is either the legal blob or the natural-language summarization of the legal blob. Okay. Pretty straightforward. I want to end on this question from Rami from a little earlier in the YouTube chat: should we fine-tune using pure pre-trained models, or can we use a model that has already been fine-tuned? In other words, stack fine-tuning to enhance the results. This is a great question, Rami. You should always use the model that is closest to your desired behavior on your task, every time. Whether that's an already fine-tuned model, a base model, an instruct model, and so on, doesn't matter. The idea is we want to start as close to our objective as possible and then move the rest of the way. If a model exists that's already pretty good at what you want to do, you should assume it will be easier to make it even better at that than a model that's somewhere out in outer space, right? And this is the whole idea of these domain-specific models, or of adding new tasks to instruction-tuned models. We don't want to use a law model to create a medical model, for instance, right?
We want to use a regular base model for that. If we already have an instruct-tuned version of our model, adding new instructions to it is easier than re-instruct-tuning it altogether. So that's how I would approach it. Yeah. And I'm reminded of the rule of thumb: just grab an instruction-tuned version off the shelf. I'm not aware of any applications that can't deal with an instruction-tuned model and must have the base model before you start trying to get them to do your task. So that's another rule of thumb you can keep in mind, Rami: just grab the instruct model, like we did today. If there's a v0.2 that's better than the v0.1, grab that one. That's the place to start. All right. Thanks, Wiz. It's time to wrap up. Thank you, everybody, for joining us today. If you found this useful, please tell a friend, colleague, or manager about what you learned and how it's likely valuable for things you're building in your context. Join us next week to talk about how to align LLMs towards a little something different: doing math better, with ReFT. And of course, like and subscribe, and if you've already liked and subscribed, please ring that bell to stay up to date as we drop new stuff on YouTube every week. If you're seriously ready to accelerate your LLM application building from prototype to production, our next cohort of the AI Engineering Bootcamp starts on April 2nd; submit your application today. It's a tough course and a lot of work, but it's very meaningful on the other side for a lot of the folks taking it. If you enjoyed hanging with us today and you're not in Discord yet, jump in, throw an intro down, start meeting people and joining community sessions, and you can start building today with all of the concepts and code we've shared on YouTube directly through our AIM Index. Or you can start learning through our open-source courses, like our LLM Ops: LLMs in Production cohort-one course, which we recently open-sourced; more open-source courses are coming, so stay tuned. Finally, if you have any feedback, please share it with us in the chat, in the feedback form you'll receive, or on Luma. As always, keep building, shipping, and sharing, and until next time, we'll most certainly keep doing the same. See you all next week. See you next time.
Practical Fine-Tuning of LLMs
3,737
AI Makerspace
20240307
GPT-4 Summary: Unravel the complexities of fine-tuning in LLM applications at our enlightening event, designed for everyone from learners to AI engineering leaders. Discover the nuanced world of Supervised Fine-Tuning (SFT) and its pivotal role in building, shipping, and sharing effective LLM applications. This session aims to demystify the primary forms of fine-tuning crucial for enhancing user experiences, including task training, constraining the I-O schema, and language training. Dive into enterprise use cases, live fine-tuning demonstrations with the latest tooling, and a comprehensive Q&A session addressing common theoretical inquiries such as the distinctions between PEFT, LoRA, distillation, and the tradeoffs with prompt engineering. Whether you're seeking to understand the practical applications of fine-tuning, conceptualize SFT to peers, or develop a strategic approach to when and how to fine-tune your LLM applications, this event promises invaluable insights at the frontier of AI development. Join us to elevate your knowledge and application of fine-tuning in the ever-evolving landscape of large language models. Event page: https://lu.ma/practicalft Have a question for a speaker? Drop them here: https://app.sli.do/event/jxjJcjqP6ZwEhop9KQjwzC Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/AIEbootcamp How'd we do? Share your feedback and suggestions for future events. https://forms.gle/AFwLq948NNm6dh3Z6
2024-06-10T02:32:08.913282
https://youtube.com/live/Anr1br0lLz8
Hey, Wiz, is there a way to know that what comes out of any RAG application we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. So there are no absolutes. But is there a way to know that changes we make to the system, to our RAG application, make the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah, I think we can make like a RAG assessment. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg, and we are here to talk RAG eval today. Welcome to AI Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout-out in the chat and where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from LangChain, their most recent stable release and most stable version ever. We're also going to outline how you can actually assess your RAG systems using the RAG assessment, or Ragas, framework. Finally, we'll do some advanced retrieval: we'll pick one method off the shelf that's built into LangChain and show how we can go about this improvement process. We're very excited to have the Ragas co-founders and maintainers, Jithin and Shahul, joining us for the Q&A today, so definitely get your questions in the chat, anything you're curious about Ragas; we have the creators in the house today. And of course we'll see Wiz, aka the LLM Wizard and CTO at AI Makerspace, back for demos real soon. So let's get into it. Today we're talking RAG evaluation, this black art that everybody is really focused on as they start to build, prototype, and deploy these systems to production in 2024. As we align ourselves to this session: we want to understand what's up with this LangChain v0.1 that just came out, how we can build a RAG system with the latest syntax, and then also how to evaluate it; there are a lot of changes happening on the Ragas side just as on the LangChain side. Finally, we want to see how we can pick different tools and different ways to improve our system, our application, and how we can then quantify that improvement using evaluation. So first we'll go into LangChain, then a high-level view of RAG to see exactly where the different LangChain components fit in. Finally, we'll see what you all came here for today: the Ragas metrics and how to implement the Ragas framework. So we'll be building, evaluating, and improving today, and the Q&A should be pretty dope. So, LangChain v0.1.0. What's LangChain all about again? Well, it's all about enabling us to build LLM applications that leverage context. They're so-called context-aware, so we can connect to other sources of data, we can do lots of interesting prompt engineering, and we can essentially do stuff in the context window that makes our applications more powerful. And also reasoning: this is the agentic behavior stuff, and look for another event from us soon that focuses more on reasoning. Today we're focused on context, though, and we're doing that in the context of v0.1.0.
The blog they put out with the release said the journey of a thousand miles always starts with a single step, and that's kind of where LangChain sees themselves to be today. LangChain Core has come together, LangChain Community has come together, and they're officially going to increment v0.1 to v0.2 if there are any breaking changes; they'll continue to support v0.1 for a time every time that happens. And as bug fixes and new features come out, they'll also increment the third slot, v0.1.x. So pay attention to how quickly development goes from here, because I imagine there's a lot of great stuff on the horizon coming from LangChain. There was a lot of great stuff in the v0.1 release, and we're going to primarily focus on retrieval today, and also on LangChain Core, which leverages LCEL, the LangChain Expression Language. In terms of retrieval, there's a lot you can check out and add after today's event and then go assess whether it actually helps your pipelines, so I definitely encourage you to look at those in more detail after today. There are also production components that we hope to explore in future events. But starting from the ground up, we want to focus on LangChain Core. This is the LangChain Expression Language, a very easy, elegant way to compose chains. It dovetails directly into deployments with LangServe, and into operating in production environments with monitoring and visibility tooling in LangSmith. So it all starts from here and lets you do some industry-leading best-practice work with these tools. Today we're going to focus on a couple of aspects of LangChain: we'll take LangChain Core functionality, and then we'll also leverage models and prompts, as well as retrieval integrations, from LangChain Community. Chains, of course, are the fundamental abstraction in LangChain, and we'll use those pieces to build our RAG system today. When we go and assess, we'll take it to the next level with an advanced retrieval strategy, which will allow us to quantitatively show that we improved our RAG system. So, a quick recap on RAG in general for everybody. The point of RAG is to help avoid hallucinations. This is the number one issue everybody's talking about: confident responses that are false. We want our systems, our applications, to be faithful, and we'll see that we can actually evaluate this after we build our systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts; we want them to be fact-checkable. The idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers we generate. Visually, we can think of asking a question, converting that question to a vector embedding representation, and then looking inside our vector database, our vector store, the place where we store all of our data in vector format, for things similar to the vector of the question we asked. We find those similar things, and we set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query.
You may not answer the user's query unless you have context. If you don't know, say "I don't know." And then into this prompt we inject these references; we augment the prompt. And of course, where does the prompt go? It goes into the chat model, into our LLM. This gives us our answer and completes the RAG application's input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax: the chat-style models we use generally follow a system/user/assistant message structure, and LangChain tends to prefer system/human/AI terminology instead, which personally I think is a little more straightforward. In terms of the prompt template, we already saw it: this is simply setting ourselves up for success so we can inject those reference materials and generate better answers. Now, it matters what those reference materials contain and how they're ordered, and that will be the focus of our evaluation. When we create a vector store, we're simply loading the docs with a document loader, splitting the text with a text splitter, creating embeddings with an embedding model, and storing the vectors in our vector store. Then we wrap a retriever around it and we're ready to rock and RAG. Our build today, as mentioned, is going to leverage OpenAI models: the Ada embeddings model and OpenAI's GPT models. And for data, we're going to set up a RAG system that lets us query the LangChain v0.1.0 blog. We'll read in that data and create a RAG application based on the LangChain blog that we can ask questions of, to see whether we missed anything from this session that we might also want to learn about 0.1.0. To set up our initial RAG system, we're going to send you over to Wiz to show us the LangChain v0.1.0 RAG setup. Hey, thank you, Greg. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions, and then later on we're going to think about how we might evaluate that. Now, the biggest change between this and what we might have done before is the release of LangChain v0.1.0. This is basically LangChain's first real minor version, and we're seeing this idea of splitting the core LangChain features out, which is exactly what Greg was just walking us through. You'll see that we have mostly the same code you're familiar with; we can still use LCEL as we always have, and that stays part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our LangChain application or pipeline. In this case we'll start, of course, with our classic dependencies: LangChain, and you'll notice we also have a specific package for OpenAI, for Core, for LangChain Community, as well as LangChain Hub. All of these let us pick and choose whatever we'd like from the LangChain package. This is huge, right?
One of the things people are oftentimes worried about with LangChain is that there's a ton of extra, unnecessary stuff in there. Well, this goes a long way towards solving that problem, and it's awesome. So let's first see which version we're working with, so if you're watching this in the future you can be sure: we're on version 0.1.5. We're already at .5; LangChain is hard at work over there. We're going to need to add our OpenAI API key, since we are going to be leveraging OpenAI: this is how we'll use our LLM for evaluation, for generation, and for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much the same stuff we always have: we need to create an index, and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started, as we always do, with creating the index. Now, we can and will still use LCEL. LCEL is important. One of the things we'll show in this notebook is that you don't have to use LCEL; they've implemented some abstractions that convert the base chains you're used to importing into LCEL format, so you get all the advantages. But we're still going to look at LCEL today, because it's an important piece of the LangChain puzzle. First, though, our first difference: we're going to load some data, and we're going to load it from the LangChain Community package, where we grab our document loader, the WebBaseLoader. Importantly, this is not part of core LangChain; it's a community package, and it works exactly the same as it always has. Our WebBaseLoader lets us load this web page, which we do with loader.load, and then we can check that we have our metadata for the page. We're happy with that. Next, the second classic step of creating the index: we have a document, just one document in this case, and we need to convert it into several smaller documents, which we'll do with the always-fun RecursiveCharacterTextSplitter. You'll notice this has stayed part of core, so it's in the base LangChain package. Hooray. We've chosen some very arbitrary chunk sizes and overlaps here, and then we can split those documents. This is less focused on a specific LangChain RAG setup and more on the evaluation, so we're just choosing these values to showcase what we're trying to showcase. You see that we've converted that one web page into 29 distinct documents. That's what we want from our splitting. Next, we're going to load the OpenAI embeddings model. You'll notice we're still using text-embedding-ada-002. We don't have to use this embeddings model, and it looks like very soon we'll be able to use OpenAI's latest embedding models once the tiktoken library updates; there's a PR that's ready and just waiting to be merged. But for now, until that change lands, we're going to stick with text-embedding-ada-002, the classic embedding model. Nothing too fancy.
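A sketch of that indexing setup is below, following the LangChain v0.1.x package layout described above. The blog URL and the chunk size and overlap values are assumptions, not the exact values from the notebook.

```python
# !pip install -qU langchain langchain-core langchain-community langchain-openai langchainhub
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

loader = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/")  # assumed URL
docs = loader.load()

# Arbitrary chunking values, as in the walkthrough.
splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=100)
split_docs = splitter.split_documents(docs)  # a few dozen chunks from the single page

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
```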
Just what we need. When it comes to our FAISS vector store, we need to get that from LangChain Community, but otherwise it's exactly the same as it used to be; there's no difference in the actual implementation of the vector store, it's just coming from the community package. We pass in our split documents as well as our embedding model, and away we go. Next we create a retriever: same as we've always done, .as_retriever() on our vector store. Now we can interact with it through that retrieval API. We can test it to see it working: why did they change to version 0.1.0? And we get some relevant documents for that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG; next we need to create the A and the G. We're going to showcase a few different ways to create a prompt template. You can just pull one from the hub; there are lots of community-created and LangChain-created prompts, and the idea is you can pull one that fits your task. The one we'd pull is maybe not ideal, though, so we're going to create our own. We're just going to create a simple one: answer the question based only on the following context; if you cannot answer the question with the context, please respond with "I don't know." That's a classic. We pass in our context, we pass in our question, away we go, and you'll notice this is exactly the same as it used to be. Let's go, LangChain. Now we'll set up our basic QA chain. I've left a lot of comments in the implementation of this LCEL chain to hopefully clarify exactly what's going on, but for now we'll just say we can create this chain using LCEL, and we want to pass out our context along with our response. This is important for the evaluations we're hoping to do with Ragas, so we need to make sure we pass out our context as well as our response. We'll look at another way to implement this chain a little later, which will show how to do this a bit more easily while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. Now we can test it out: what are the major changes in v0.1.0? It gives a correct answer listing the major changes. That's great. Then: what is LangGraph? And the response from the LLM is "I don't know," which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. The next step, now that we have this base chain, would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was LangChain v0.1.0 RAG. Now let's talk RAG assessment. The Ragas framework essentially wraps around a RAG system: if we think about what comes out in our answer, we can look at it and assess the different pieces that helped generate that answer within the RAG system.
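A sketch of the retriever and QA chain just described, continuing from the indexing sketch. The prompt wording and the exact chain structure are an approximation of the pattern described, not the notebook code itself; the important part is that both the context and the response come out of the chain.

```python
from operator import itemgetter
from langchain_community.vectorstores import FAISS  # requires faiss-cpu
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

vector_store = FAISS.from_documents(split_docs, embeddings)
retriever = vector_store.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer the question with the context, respond with 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Pass the retrieved context through alongside the generated response so we can
# evaluate both with Ragas later.
rag_chain = (
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    | RunnablePassthrough.assign(response=prompt | llm | StrOutputParser())
)

result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"])
print(result["context"])  # the retrieved documents
```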
And we can use that information to decide on updates, on different things we might try to add to augment either our retrieval or our generation, and we can continue the process of improvement by continually measuring. But what are we measuring? This is where RAG evaluation gets particular. We have to make sure we understand the core concepts of RAG eval, and to do this in an automated way, we need four primary pieces of information. You're probably familiar with question-answer, input-output pairs, and you may even be familiar with question-answer-context triples. For eval, we also need a fourth component: the ground truth, the correct or "right" answer, so to speak. Now, in practice it's often not feasible to collect a comprehensive, robust ground-truth dataset. So, since we're not focused on absolutes here, we can actually create a ground-truth dataset synthetically, and that's what we'll do today: we'll take the best model we can, pull GPT-4 off the shelf, and generate the set of information that allows us to do evaluation. We'll see how this works; it's pretty cool, and Ragas has new tooling for it. In terms of the actual evaluation, once we have this data set up, we need to look at two different components. The first is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks: how relevant is the context to the question? Context recall, on the other hand, asks: is the retriever able to retrieve all of the context relevant to the ground-truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks: how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks: is the answer fact-checkable from the context, or is it a hallucination? So the four primary metrics in the Ragas framework are these four: two for retrieval, two for generation. Let's dig a little deeper into each one so we really start grokking each metric individually, because they differ in nuanced ways. Faithfulness is trying to measure factual consistency. Let's look at an example. The question: where and when was Einstein born? The context: Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc. A high-faithfulness answer says he was born in Germany on 14 March 1879, whereas a low-faithfulness answer might get part of it right but hallucinate the rest. We want to avoid those hallucinations. So we're looking at the number of claims in the generated answer that can be inferred from the given context, over the total number of claims in the generated answer. To be 100% faithful to the facts, we want those to be the same number. Answer relevancy is trying to measure, of course, how relevant the answer is. Rather than considering factuality, what we're doing here is penalizing the answer when it lacks completeness or, on the other side, when it contains redundant details. So, for instance: where is France and what is its capital? A low-relevance answer is like talking to somebody who's not paying attention to everything you said: "Oh, France is in Western Europe."
It's like, yeah, okay, well, what about the other part of my question? You want the answer to be completely relevant to the input, just like a good conversationalist's answer would be. Okay, so context precision, as we get into the retrieval metrics: this is a way to evaluate whether all of the ground-truth-relevant items are present in the context and how well they are ranked. What we're looking for is for the most relevant chunks we return from our vector database to appear in the top ranks. We want lots of good stuff ranked at the top. We want everything that's relevant to the question to be returned in our context and to be rank-ordered by relevancy; it makes sense, just the way we'd want to do it if we were writing a book report. Finally, context recall is doing something similar to what we talked about before: we want to make sure we're retrieving everything that's relevant, that we're addressing everything that's asked. So if the question is, where is France and what is its capital, the key here is that we're actually leveraging the ground truth in calculating this metric. France is in Western Europe and its capital is Paris. High context recall means the retrieved context covers both of these: you can look at the number of ground-truth sentences that can be attributed to the context over the number of sentences in the ground truth. And low context recall looks like what we saw earlier: France is in Western Europe, simple villages, Mediterranean beaches, renowned sophisticated cuisine, on and on, but nothing about Paris, which of course the ground truth includes. If we look at each of these metrics, we get some idea of how our system is performing overall, though it's generally difficult to get a perfect picture. These are the tools we have, and they work very well for directional improvements. Context precision conveys a high-level quality idea: not too much redundant info, but not too much left out. Context recall measures our ability to retrieve all of the necessary or relevant information. Faithfulness tries to help us avoid hallucinations. And answer relevancy asks: am I to the point here? Am I relevant to the question that was asked, or am I going off on a tangent? Finally, Ragas also has a few end-to-end metrics; we're just going to look at one of them today, to give you an idea, and that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to share with others, but beware: you should know what's really going on under the hood, and directional improvement is really what we want to focus on. The idea is to look at how the answer relates to the ground truth. If we have a true, human-verified ground-truth dataset, this is probably a very useful metric. If we have one that's generated by AI, we might want to be a little more careful about relying on it too much.
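Written out, the two counting-based metrics described above come down to roughly the following ratios:

```latex
\text{Faithfulness} = \frac{\lvert \text{claims in the answer that can be inferred from the retrieved context} \rvert}{\lvert \text{claims in the answer} \rvert}

\text{Context Recall} = \frac{\lvert \text{ground-truth sentences attributable to the retrieved context} \rvert}{\lvert \text{sentences in the ground truth} \rvert}
```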
But if we have great alignment between ground truth and answer, we're doing a pretty good job. Let's see a quick example for this one. We're looking at two different things: factual similarity and semantic similarity. Again, you can use the Einstein example. If the ground truth is "Einstein was born in 1879 in Germany," the high-answer-correctness answer is exactly that, and low answer correctness means getting something literally wrong. So there is overlap between all of these metrics, and it's important to track that. Overall, the steps for doing Ragas are: generate the question, answer, context, and ground-truth data; there's an awesome new way to do this called synthetic test data generation that was recently released by Ragas, and we'll show you how to get it done today. Then run the eval, and then go try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from LangChain today, called the multi-query retriever. This generates many queries from our single query, retrieves for each of them, and then returns the relevant context from each of those queries into the prompt, so we're actually getting more information. But you can pick any retrievers off the shelf, and then you can go back and look: did my metrics go up? Did they go down? What's happening as I add more data or more advanced retrieval methods to my system? In this way we can combine Ragas with RAG improvement, as Wiz will show us right now. Oh yeah, Greg, can't wait. Thank you. So, Ragas. This is the thing we're here to talk about. It's an amazing library that does a lot of cool, powerful things, but what's most important is that it gives us insight into the changes we make, in terms of the directional impact they have. While we might not be able to say these answers are definitely true, as Greg was expressing, we can say it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually run an evaluation on all of the metrics, we need two important things: one, questions, which should be relevant to our data if we're trying to assess our retrieval pipeline as well as our generations; and two, some ground truths. As Greg mentioned, we're going to use synthetically created ground truths; it might be more performant to use human-labeled ground truths, but for now we can let the LLM handle it. The idea is that we're going to leverage Ragas's new synthetic test data generation, which is very easy to use, much better than the process we had before, which was basically doing this manually. We're going to use it to create our test dataset. Now, it's important to keep in mind that this uses GPT-3.5 Turbo 16K as the base generator model and GPT-4 as the critic, so we want to make sure we're not creating too much data, or if we are, that we stay very cognizant of the costs.
So the first thing we're going to do is create a separate document pile to pull from. We're doing this to mitigate the possibility that we're asking the same LLM the same questions with the same contexts, which might unfairly benefit the simpler method. So we're going to create some new chunks with size 1000 and overlap 200, which gives us 24 docs, about the same as the 29 from before. And then we're going to use the test set generator. It really is as easy as TestsetGenerator.with_openai(), which is what we're using for our LLM, and then generate_with_langchain_docs. You'll notice this is specifically integrated with LangChain; there's also a version for LlamaIndex. All we need to do is pass in our documents, the size we'd like for our test set, and then the distributions. This distributions argument is quite interesting: it creates questions at these ratios from these subcategories. The idea is that this tests our system on a variety of different kinds of questions. We have simple, which is, as you might think, very simple. We have reasoning, which requires more complex reasoning that might tax our LLM a little harder. And then we have multi-context, which requires multiple contexts, so our LLM will have to pick up a bunch of them to be good at that particular kind of task. The reason this is important is that not only do we get an aggregate directional indication of how our system is improving, we can also see how it's improving across specific subcategories of application. Very cool, very awesome; thanks to the Ragas team for putting this in. We love it, and it makes the job a lot easier. We can look at an example of the test data: we have our question, some contexts, our ground-truth response, and our evolution type, which in this case is simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration between LangChain and Ragas; it's currently being brought up to speed, but for now we're just going to do this manually. So we take our test set, and we can see we've got our questions, contexts, ground truths, and evolution type; that's the distribution we talked about earlier. Then we grab a list of questions and ground truths, we ask those questions to our RAG pipeline, and we collect the answers and the contexts. Then we create a Hugging Face dataset from those collected responses along with the test questions and test ground truths. Each row in the dataset has a question with our RAG pipeline's answer, our RAG pipeline's context, and the ground truth for that response. Now that we have this dataset, we're good to go and we can start evaluating. Greg has talked about these metrics in depth; the code and the methodology can be found in the Ragas documentation, which is very good. The ones we care about today are faithfulness, answer relevancy, context precision, context recall, and answer correctness.
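A sketch of the test-set generation and response collection described above, continuing from the earlier sketches. This follows the Ragas 0.1.x testset API of that period; exact class paths, column names (for example "ground_truth"), and the test size are assumptions that may differ in other Ragas releases.

```python
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from langchain.text_splitter import RecursiveCharacterTextSplitter
from datasets import Dataset

# Re-chunk the source page separately (size 1000, overlap 200) so test questions
# aren't generated from exactly the chunks we retrieve over.
eval_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
eval_documents = eval_splitter.split_documents(docs)

generator = TestsetGenerator.with_openai()  # gpt-3.5-turbo-16k generator, gpt-4 critic
testset = generator.generate_with_langchain_docs(
    eval_documents,
    test_size=20,  # assumption: keep it small to control cost
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
test_df = testset.to_pandas()

# Ask each test question to our RAG chain and collect answers and contexts.
answers, contexts = [], []
for question in test_df["question"]:
    result = rag_chain.invoke({"question": question})
    answers.append(result["response"])
    contexts.append([doc.page_content for doc in result["context"]])

response_dataset = Dataset.from_dict({
    "question": test_df["question"].tolist(),
    "answer": answers,
    "contexts": contexts,
    "ground_truth": test_df["ground_truth"].tolist(),
})
```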
And you can see that using the metrics is as simple as importing them and then putting them into a list, so that when we call evaluate, we pass in our response data set, which is the data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that RAGAS made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times we used to see. Thanks, RAGAS team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. We want these numbers to be high, but we're more interested in seeing whether changes we make to our system make those numbers higher. So let's look at another awesome part of RAGAS before we move on to making a change and seeing how it goes, which is that we have the ability to look at these scores at a per-question level in a Pandas data frame. You can see that we have all of our scores, and they're given to us in this data frame. This is huge, especially because we can map these questions back to those evolution types, and we can see how our model performs on different subsets of the elements of that distribution. So now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the LangChain documentation. We're going to use this as an advanced retriever, so this should retrieve more relevant context for us; that's the hope, anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before; we're just wrapping it in this multi-query retriever. Now, before, we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which takes our prompt. We're using the same prompt that we used before, so we're not changing the prompt at all. And then we're going to create our retrieval chain, which is going to do exactly what we did before in LCEL, but we don't have to write all that LCEL. So if you're looking for an easier, abstracted method, here you go. You'll notice we call it in basically the same way, and then we're also looking at this answer; the answer is basically the response.content from before. We can see this is a good answer, makes sense to me, but we also have a better answer for this "What is LangGraph?" question. So this heartens me, right? I'm feeling better. Maybe this will be a better system. And before, you might have had to just look at it and be like, yeah, it feels better. But now, with RAGAS, we can go ahead and just evaluate.
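Here is a rough sketch of that change and the re-evaluation loop, assuming LangChain v0.1-style helpers (MultiQueryRetriever, create_stuff_documents_chain, create_retrieval_chain) and the v0.1-era RAGAS evaluate API; names like base_retriever, primary_qa_llm, rag_prompt, test_questions, and test_groundtruths are placeholders standing in for the notebook's own objects, and column names may need adjusting for other RAGAS versions.

```python
from datasets import Dataset
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from ragas import evaluate
from ragas.metrics import (
    faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness,
)

# Same base retriever and LLM as before, wrapped in the multi-query retriever.
advanced_retriever = MultiQueryRetriever.from_llm(retriever=base_retriever, llm=primary_qa_llm)
document_chain = create_stuff_documents_chain(primary_qa_llm, rag_prompt)  # same prompt as before
retrieval_chain = create_retrieval_chain(advanced_retriever, document_chain)

# Cycle through the synthetic test set, collecting answers and retrieved contexts.
records = {"question": [], "answer": [], "contexts": [], "ground_truth": []}
for question, ground_truth in zip(test_questions, test_groundtruths):
    result = retrieval_chain.invoke({"input": question})
    records["question"].append(question)
    records["answer"].append(result["answer"])
    records["contexts"].append([doc.page_content for doc in result["context"]])
    records["ground_truth"].append(ground_truth)

response_dataset = Dataset.from_dict(records)
results = evaluate(
    response_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness],
)
results_df = results.to_pandas()  # per-question scores, mappable back to evolution_type
```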
We're going to do the same process we did before by cycling through each of the questions in our test set, getting responses and contexts for them, and then we're going to evaluate across the same metrics. You'll notice that our metrics have definitely changed, so let's look a little more closely at how they've changed. It looks like we've gotten better at our faithfulness metric, we've gotten significantly better at answer relevancy, which is nice, and we've gotten a little bit better at context recall. We've taken some small hits: a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve what we hoped it would improve, and now we are left to tinker to figure out how we would improve this so that answer correctness doesn't get impacted by this change. But at least we know in what ways and how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGAS. And each of these metrics corresponds to specific parts of our RAG application, so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our RAG systems not just by thinking, "I think that's better, the LangGraph question got answered better," but actually we can go and show our bosses, our investors, anybody that might be out there listening: hey, look, we have a more faithful system. Check it out; we went from the base model to the multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can take things off the shelf that we were trying to qualitatively analyze before, and directionally improve our systems by instrumenting them with RAGAS and measuring before and after small iterations to our application. So today we saw LangChain v0.1.0 to build RAG, and then we actually did RAG on the LangChain v0.1.0 blog. Expect stable releases from here; it's more production-ready than ever. And you can not just measure faithfulness: you can measure different generation metrics, different retrieval metrics, even different end-to-end metrics. Big shout out to everybody that supported our event today: shout out to LangChain, shout out to RAGAS, and shout out to everybody joining us live on YouTube. With that, it's time for Q&A, and I'd like to welcome Wiz back to the stage, as well as Jithin and Shahul from RAGAS, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jithin and Shahul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich? Yeah. You want to go for it? I don't think I actually understand what you mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shahul? Yeah, I mean, what is the difference between memorization and hallucination in RAG systems? That's it.
The line between memorization and hallucination: I don't know where to draw that particular line. It seems like what it means is the usage of internal knowledge versus, you know, there are situations in RAG where knowledge is a continually evolving thing, right? So maybe the LLM thinks that a person is still alive, but the person died yesterday or something. Now, if that particular fact is retrieved using Wikipedia or something, there will be contrasting knowledge between the LLM and what the ground truth Wikipedia says. That can be hard to overcome, because the LLM still believes something else. So it's a hard problem to crack, and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, when using LLMs to build RAG, we can align the LLMs so that they answer only from the given grounded text data and not from internal knowledge, or at least there must be a high preference for the grounded text data compared to what is in the LLM's internal knowledge. So that can be one of the solutions. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided, basically. When it comes to memorization versus hallucination, I think the most important thing is that you could maybe frame memorization as a slightly less negative form of hallucination, because it's likely to be closer to whatever the training data was. But in terms of a RAG application, both are bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context-aware chunking. Can we expect significant improvements of RAG results using smart chunking? What do you think, Jithin? Is this something that we can expect improvements in? Yeah, so one thing that we see when we're building RAG systems is that how you're formatting the data is where most of the problems are. If you take some time to clean up the data and to format it in a way that actually makes it easier for your RAG system, the performance difference is really great, because with models right now, if you're using a very stable model and you provide the correct context, the model will be able to use the information in the context to get it right. So all these tips and tricks to optimize that, even like Chris using the multi-query method, right? That's also another trick to make sure that you get different context from different perspectives into the final answer. So all these different types of tricks can be used, and this is actually why we started this: we wanted to evaluate all the different tricks that are out there and try to see which works best, because it can be different for your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems; just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot, because what goes in comes out. So it's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, and maybe I'll send this one over to Shahul.
Can you compare TruLens and RAGAS? This is the first I've heard of TruLens. Maybe other people have; maybe you can tell us a little bit about what they're doing, what you're doing, and the overlap you see. Sure. Yeah, TruLens has been around for a while for evaluating ML applications, and they also cover a lot of different applications. RAGAS currently is mostly focused on RAG, as in we wanted to crack the application that most people care about, and that is RAG. And so we are mostly doing things that can help people evaluate and improve their RAG systems. We are not building any UI. We are largely interested in providing integrations to players like LangSmith, so that people can trace and see it in their UI, rather than building a UI on top of RAGAS. So RAGAS mainly offers metrics and features, like the synthetic test data generation you have seen, to help you evaluate your RAG systems. I don't think TruLens has a synthetic data generation feature, which is something that most of our developers really liked, because it has saved a ton of their time; nobody really wants to go and label hundreds of documents. It's a boring job, right? So we are trying to double down on these points that we have seen developers really like, and we are trying to stay true to the open source community as well. Nice. Okay, very cool, very cool. Rad asks, and I'll send this one over to you, Wiz: can you combine the multi-query retriever with a conversational retrieval chain? Sure. Yeah. Basically, LangChain works in a way where you can combine any retriever inside of any chain, right? So a retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process, or combine many different retrievers in an ensemble, you can do that with basically any chain. That conversational retrieval chain is looking for a retriever, and so as long as it can be accessed through the retrieval API, it's going to work fine. I would add, though, for the conversational retrieval chain, you'll want to use the 0.1.0 version, which has been implemented with LCEL. But other than that, you're good to go. Okay, okay. And sort of back to this idea of smart chunking, a smart hierarchy of data. We often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? What's the chunk size? So Sujit asks, and maybe I'll send this one over to you, Jithin: I know the chunk size matters. Are there guidelines for chunking that you guys are aware of, or that you recommend when people are building RAG systems? Yeah, so I don't have a very good guideline; maybe Shahul can back me up. But one thing that I've seen personally from experience is: A, do the evaluations, but then B, make sure that you combine multiple levels. So you basically create a hierarchy system where you have different chunks, then you summarize the different concepts, summarize the different chunks, so that all the core ideas are there in the hierarchy. That has actually been very helpful. So, yeah.
So on exact chunking size, I haven't seen it show up in the metrics as such, but all the recursive summarization has helped, and I think LlamaIndex has a few retrievers right there. Shahul, what do you think? Yeah, just adding some more points to it. I think there is no one chunk size that fits all types of documents and all types of text data. So it's a relative thing, and there are two ways to handle this problem. The general rule of thumb is to ensure that there is enough context, that the chunk makes sense even on its own as an individual chunk; it should make some sense if a person reads it by itself. So how do you achieve this? You can achieve this by writing a set of heuristics; let's say, you know, it can be something like, okay, determine the document type and change the chunking using that. And I think, moving from heuristics to where we are going, we might even see smaller models, very small models, that are capable of determining chunk boundaries smartly, so that you don't really have to rely on heuristics. It's a more generalizable way of doing it. So I think that's where we are going in the future of chunking, and hopefully the problem gets solved like that. Yeah, yeah. I really like this idea of making sure each individual chunk makes sense before sort of moving up a level and thinking about, okay, what's the exact hierarchical parent document, multi-query, whatever it is that you're doing; each chunk should make sense. And that's going to be dependent on data. Yeah, I really liked that. And okay, so related to that, I want to go to this embedding model question in the Slido from Ron. It's similar to this chunking idea. I mean, people always want the answer, you know? So, what chunk size? Here, Ron asks: which embedding models should I be using when I develop a system? Any emergent models or techniques where I can see significant improvements? Maybe Shahul, if you want to continue here. Sure. Again, there is no one size that fits all for this answer. The thing is that, again, it depends on a lot of factors. The first question will be open source or closed source. You have a lot of open source players even rivaling OpenAI with their open source models; I think recently the Alibaba group released their M3 embedding, which is awesome, one of the most powerful open source embeddings we have ever seen, even rivaling OpenAI's embeddings. So it's a set of questions that you have to answer. If you want to go the easy way of building a baseline RAG, of course OpenAI's embeddings are a good place to start; you don't have to worry about anything else, and then you can iteratively improve it. That's where RAGAS also comes in: let's say now you have an abundance of embeddings to choose from, and you want a way to compare them. So you can use RAGAS, just compare all these different embeddings, choose the one that fits you, and you're done. There it is. There it is. Just closing up this topic on chunks and embedding models.
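Since both of those answers come back to "do the evaluations," here is a minimal sketch of how you might compare chunking settings with the same RAGAS loop used earlier. The helpers build_rag_chain and run_ragas_eval are hypothetical stand-ins for the index-building and evaluation code shown in the walkthrough, not part of any library, and the chunk sizes are arbitrary examples.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Hypothetical helpers: build_rag_chain(chunks) rebuilds the vector store and chain,
# run_ragas_eval(chain, testset) runs the collect-responses-then-evaluate loop from above.
configs = [(500, 50), (1000, 200), (2000, 200)]
for chunk_size, chunk_overlap in configs:
    splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    chunks = splitter.split_documents(raw_documents)
    chain = build_rag_chain(chunks)            # hypothetical
    scores = run_ragas_eval(chain, testset)    # hypothetical; returns metric -> score
    print(f"chunk_size={chunk_size}, overlap={chunk_overlap}: {scores}")
```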
Wiz, I wonder, why did you choose Ada? Why did you choose, what is it, 750 overlap? Any particular reason? Zero thought was put into those decisions. We used Ada because it's the best OpenAI model that's currently implemented, and we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or more mindful approach, and so to do that, we just kind of selected them. I think the thing I really want to echo from what we've heard so far is: when we're thinking about our index, or we're thinking about our vector store, we really want to be able to represent individual quanta of information. The closer we can get to that, the better it will be, and then we can add that hierarchy on top. And I think what was said about using models to determine that at some point is definitely a future we can imagine we'll be living in soon. Yeah, yeah. And I think, again, we go back to this data-centric idea. It's easy to get the RAG system set up and to get it instrumented with RAGAS, but you're going to get the improvements, you're going to get the thing really doing what you need it to do for your users, by doing the hard, kind of boring data work, data engineering, data science on the front end that you really just can't outsource to AI and have to deal with yourself. Okay, one more sort of "what's the answer" question. I want to maybe send this one to Jithin. If somebody is picking up RAGAS and they build a RAG system and they're like, okay, well, which RAGAS metric should I use, which one should I look at, what would you say? Is there a starting point? Is there a sequence that you'd look at? Or is the jury still out on this? So first of all, just try out all of the metrics. Basically, figuring out how each component is doing, what the state of all these components is, gives you an idea of, okay, where can I make an improvement as fast as possible? If your generator is bad, maybe try out a few other LLMs; or if your retriever is bad, then figure out, okay, in the retriever part, what is actually happening: is it context relevancy? Is it the recall that's bad? That is the way. So starting off, try out all the metrics that you have, and then focus on the ones that are the worst.
And after you understand what the metrics are, you will get an idea of what other stuff you can actually try out to improve them. Try the easiest things first, cross out the low-hanging fruit, and that is how you will, over time, progressively improve it. But like I said, it's not the absolute values that matter, it's the trends that matter, right? You guys did a good job of explaining that. So make sure you go for the easiest things that you can patch up fast, and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's so simple when we can break it down like this. And you know, just a shout out to everybody out there, and to Manny: that was kind of an attempt to answer one of your many questions today. We'll see if we can get some more on LinkedIn. But I think this idea of getting your system instrumented, so you can start to look at and chunk up different pieces of it and try to improve them, there's a lot of content that needs to be made on this. These guys are open source first, open source forward. We'd love to see some folks in the community start to put some guides together for how to actually break down and use RAGAS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGAS in 2024? Maybe either of you want to take this and let us know what to expect from you guys heading forward this year. Yeah, Shahul, do you want to take this? Yeah, that's a tricky question. We want to go where the community takes us. So yeah, doubling down on things like synthetic data generation; there is a lot of interest there, and there is a lot of interest in expanding RAGAS to other LLM tasks as well. So there are all these interesting directions to take, and hopefully we'll get more signals from the community on which path to take. I mean, we do have a lot of directions, a lot of feature requests coming in, so we have to just take that decision and move on. But yeah, as of now, the synthetic test generation is something that gets a lot of interest. We want to make it very stable, very useful, and make sure that we push the limits of the closed source models plus frameworks to build great test data that's very easy to use. Yeah. Anything to add, Jithin? Yeah, honestly, right now we have a good base, and we're very curious about what we can do with evaluation-driven development, what the extremes of that are. So I'm curious to see what the community comes up with, what you guys and we come up with. So yeah, really excited for that. Yeah, yeah, let's see what everybody builds, ships, and shares out there and contributes. Well, thanks so much, Jithin; thanks, Shahul; thanks, Wiz. We'll go ahead and close it out for today. And thanks, everybody, for joining us. Next week, you can continue learning with us: we're talking alignment with reinforcement learning from AI feedback. If you haven't yet, please like and subscribe on YouTube.
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI engineering bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production, using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, for any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well: we will get back to your question, and we will share all of today's questions with the RAGAS guys to see if we can get follow-ups for everybody that joined us and asked great questions today. So until next time, and as always, keep building, shipping, and sharing, and we and the RAGAS guys will definitely keep doing the same. Thanks, everybody. See you next time.
RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1
3,842
AI Makerspace
20240207
GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively! Special thanks to LangChain and RAGAS for partnering with us on this event! Event page: https://lu.ma/theartofrag Have a question for a speaker? Drop them here: https://app.sli.do/event/2rLa8RML994YsMQt1KLrJi Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/ryzhbvxZtbvQ4BCv5
2024-06-10T02:37:31.643024
https://youtube.com/live/XOb-djcw6hs
Hey Chris, is it true that we can improve on our PEFT LoRA approach with this quantization thing? It sure is, Greg. Yes. And is quantization really as good and as dope as everybody's talking about? Yes. Emphatically, yes. Emphatically, yes. Man, I cannot wait to see exactly what's going on inside. You're going to show us how to do this today, right? Sure. All right. Let's go ahead and get right into it, man. We'll see you back in just a little bit. Today, we're going to talk quantization. I'm Greg. That's Chris. We're from AI Makerspace. This is a bit of an add-on to last week's event, which talked about parameter-efficient fine-tuning and low-rank adaptation. Today, we're going to take it to the next level and talk quantization. We'll demystify the idea of quantization, and we will also talk about how to leverage the latest in low-rank adaptation, which is a quantized version of it called QLoRA. As always, we'll be collecting questions with Slido, so go ahead and provide your questions for us throughout the day at that link, and then we'll answer as many as we can when we're through with the demo at the end. Of course, we'll have Chris back to lead and wizard his way through the demo on quantization soon, but for now, let's cover what we need to know so that it's going to make sense to us. We're going to talk quantization of LLMs today, and we're going to talk fine-tuning with LoRA. This is the main goal: we want to understand, and align our aim to, really grokking QLoRA, and then see how we can implement it. We got a little bit of insight into quantization last time when we were loading the model, but now we want to take a look at how it can be used to fine-tune, along with some of the background and intuition for why this works and what the industry has sort of learned about the precision of numbers within our LLMs. So we're going to talk fine-tuning, quantization, QLoRA, and then we'll do it. And to contextualize this, similar to last time, we want to understand that fine-tuning often comes into play after we do prompt engineering, often after we set up a retrieval augmented generation system. And we now want to take a look at how we can optimize our large language model, or in other words, how we want the model to act, how we want the input and output schema of the model to be a little bit more constrained, a little bit more dialed in, a little bit less large, a little bit more small. And this is the trend we're noticing as 2024 is upon us: we are seeing a bigger and bigger interest in smaller, more performant language models, and fine-tuning is really a key aspect that's going to help us get there. So let's just remind ourselves what we talk about when we talk about fine-tuning with PEFT LoRA, and why we need to do this. You know, when we talk LLMs, they're super big. They have billions and tens of billions of parameters. It's likely we'll see models with hundreds of billions of parameters before too long. Not all models are always getting bigger, but some of them are. And the reason is that if we keep adding more text and more parameters, we are pretty confident that our next-word prediction will continue to improve. But as we do this, as we build larger and larger models, we have to deal with more and more compute in order to be able to handle them, whether that's loading them, training them, fine-tuning them, or performing inference on them and serving them.
We're kind of abstracting away from the regular developer, the regular individual out there who doesn't have access to a giant cluster of GPUs to be able to even play with these things. And this is the core problem: when we want to do full fine-tuning on many, many billions of parameters, this becomes a huge pain for anybody trying to use consumer hardware, any small business trying to just use the laptops that they have, maybe a few resources on the cloud. And this is as true for fine-tuning as it is for loading and storing, certainly for deploying these models. It just costs too much. And the solution for dealing with the fine-tuning, the storing, and the deploying is kind of the same, but today we're focusing on fine-tuning. Today we're focusing on fine-tuning using fewer parameters. It's all about using fewer parameters. We don't need all of them, as we started to get some intuition into last time. And in fact, with the ones that we do have, what we're going to do today is take those parameters and make them smaller, in a sense. We're going to make them smaller in a computational sense. This is the essence of quantization. So while it may not necessarily be fewer parameters when we talk about quantization, although it often is when we talk about fine-tuning, we're just trying to move these big, big models towards smaller packages, through fewer parameters and through more efficient representation of those parameters. And we saw last time that LoRA is the number one PEFT method you should know. It's called low-rank adaptation. And the big idea of LoRA, as we discussed, was to fine-tune using factorized matrices. And again, we didn't focus on fine-tuning absolutely everything. We used fewer parameters. That was great because it was more efficient. And we found out that we could actually leverage LoRA adapters for many tasks. So you could have one big, big model and a ton of different LoRA adapters, and deploy each of those adapters to production, because inference is when the adapter actually comes into play. So it's a very flexible, very good technique for larger companies and industry, especially those that want to have many adapters on one very powerful model. We'll probably start to see this emerge as an approach to AI development in the enterprise. And, you know, it's really comparable to full fine-tuning. So we saw, in essence, that fine-tuning is all about modifying the behavior of LLMs by updating parameters. Parameter-efficient fine-tuning is all about fine-tuning with fewer parameters. Low-rank adaptation was all about fine-tuning using factorized matrices. And so parameter-efficient fine-tuning through low-rank adaptation is all about modifying behavior by updating fewer parameters using factorized matrices. So this all flows together. This leads us directly to our new friend, quantization. And this meme is so good, I had to put it twice, because it's such an oft-misunderstood idea. It has certainly taken a long time for me personally to really grok this thing, so let's see if we can break it down in a way that makes sense to all of you. First off, the weights of our LLM: when we talk about weights, it's the same thing as when we talk about parameters. Okay? So parameters, I might say weights, we're still talking about parameters.
Those parameters are simply numbers. They're just numbers. And specifically, they're floating point numbers, also known as floats. And it's important to understand a little bit of the detail here, because this is the essence of what we're doing in quantization. When we talk about floats, you may harken back to your days in school, maybe chemistry, where you learned about significant figures, sig figs, everybody's favorite, right? And then if you're like me, you become an engineer and you don't care anymore, ever again. But I was a mechanical engineer. If you're a computer scientist or computer engineer, maybe you continue to go deeper. And these days in AI, if you're a developer, you need to continue to go a little deeper. Because this idea of a float is cool: a fixed-precision significand and an exponent, so we can talk about representing, for instance, 12.345 as 12345 times 10 to the minus 3. And we can then do this by using a specific number of bits in our computer. When we talk about this precision, this fixed precision, there are a number of different types of precision. What we're generally going to be using is what's called full precision when we're doing default computations. Full precision means that I have 32 bits to represent my floating point number, and they're broken up into a couple of different pieces, but the big idea is that there are 32 bits. And the question is: is that the right amount when we want to go and deal with 70 billion parameter models and things like that? And it turns out that in machine learning, we found, over time and through experiments, that if we didn't use 32-bit precision and instead used 16-bit precision, essentially half precision, to simply represent those decimal numbers inside the neural network that represent each of the neural network weights (each of the neural network perceptrons is a way you could think about this), then we can get almost identical inference outcomes from our LLM. Because remember, we just want the words that come out at the end. We just want the ideas that come out of that. We just want the outputs. We don't necessarily care about the precision of the stuff within the black box. We put in, we get out. And a lot of people were seeing this, a lot of researchers were seeing this with the large language models: if we just leveraged half precision, we could get very, very good outcomes. And what this does is effectively halve the entire model size. So what are we saying? We're saying that we can get essentially the same thing coming out, even if we represent each of the model weights using half as much information. Because really, I mean, how many sig figs do we need? And another way we can talk about moving from a 32-bit representation down to a 16-bit representation is to say we are quantizing. We quantize the 32-bit weights down to 16-bit. Hence, quantization. Now, when it comes to quantization, there are many different approaches to quantizing model weights. So, this is very important: we're not going to cover every single approach, because that's not really necessary for what we want to discuss today.
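To make those savings concrete, here is a quick back-of-the-envelope calculation of the weight memory for a 7B-parameter model at different precisions; this only counts the weights themselves (no activations, gradients, or optimizer state), and the parameter count is a round-number assumption.

```python
# Rough weights-only memory footprint for a ~7B-parameter model at different precisions.
n_params = 7_000_000_000  # assumed round number for a Mistral-7B-class model

for name, bits in [("fp32 (full precision)", 32),
                   ("fp16/bf16 (half precision)", 16),
                   ("int8", 8),
                   ("4-bit", 4)]:
    gigabytes = n_params * bits / 8 / 1e9
    print(f"{name:>27}: ~{gigabytes:.1f} GB")
```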
But there are many different ways to quantize model weights, and we hope to continue to bring you more content on ways that are a little bit different in terms of their actual implementation and their nuances in the future. For today, we're going to focus on and use this QLoRA idea as a focusing lens. Now, the QLoRA story begins with a paper called "8-Bit Optimizers via Blockwise Quantization." This was a paper that came out of the University of Washington. Tim Dettmers was the lead author, and he's been quite a superstar in the field; he's kind of the quantization guy. And in this paper, they showed that you can use 8-bit representations and maintain the performance we see at the level of full precision, or 32-bit. So here we see, in this kind of early paper, one of these pieces of work where they're saying: hey, look, experimentally, we're seeing that if we reduce the precision, we can still get great results. And this is not reducing it to half precision; it's reducing it to quarter precision, 32 down to 8. And this bits-and-bytes approach, this bits-and-bytes paper, turned into what became the bitsandbytes library, which has since evolved and is something that we'll see Chris use today; it gets used all the time now. Now, bits and bytes: recall that one byte is equal to eight bits. We're going to continue the discussion in bits today, but you'll see many papers and discussions that talk in bytes as well. So it's pretty simple to understand why the library was named bitsandbytes. Now, again, this is one approach, and so there are some trade-offs, as there are with any approach. For instance, when we use the bits-and-bytes approach to quantization, we're not really getting any additional benefits to our inference latency. We're not really speeding up inference a whole lot by using this particular approach to quantization. However, what we are doing is leveraging a tool that gives us very flexible use of those LoRA adapters, right? So for enterprise, if we're thinking about how do I have one big model and just a bunch of adapters, this is going to be our friend. And this is why we choose to focus on this one today. And this bitsandbytes library forms the basis for what comes next. It forms the basis for this QLoRA idea, this efficient fine-tuning using quantization. And the big takeaway about fine-tuning using quantization from the QLoRA paper is that it's super great, even though it's eight times less precise. So what we actually have going on in QLoRA is not an 8-bit representation, but a 4-bit representation, which is completely insane. And we can fit all of that on a single 48-gig GPU, which is just kind of incredible. It's kind of mind-blowing that we can do this. And so this QLoRA paper is essentially saying: hey, listen, we've got this idea that we can do fine-tuning using a 4-bit approach versus even a half-precision approach, and we get amazing results. And so this is the essence of what's going on here with QLoRA. And so what we can think about is going back to this idea of PEFT LoRA fine-tuning, where we're modifying behavior by updating fewer parameters using factorized matrices.
And we add this idea of quantization, where quantization is simply representing high-precision numbers with low precision. Then we get to this place where we talk about PEFT QLoRA fine-tuning, where we're modifying behavior by updating fewer quantized parameters using factorized matrices. And so the process, as outlined in the QLoRA paper, and the process that you're going to see today, is something like this. We download the model weights. Anytime you download model weights from Hugging Face, they're always going to be in full precision, 32-bit. Then we load our parameter-efficient fine-tuning model into GPU memory. Anytime we load into GPU memory for inference or training, we're going to be loading using that parameter-efficient fine-tuning method. And then we'll initialize our low-rank adaptation, our LoRA configuration. And finally, and this is the key, this is the key to the whole thing: during training, what happens is we have the full-precision 32-bit model, and we're going to actually load the 4-bit model; we quantize 32-bit down to 4-bit for training. Now, during training, we're going to flow through the network, and as necessary, each time we have to do a computation, each time we have to calculate something during our training process, we're going to de-quantize that 4-bit representation back up to a 16-bit, half-precision representation. We're going to do the calculation, and then we're going to re-quantize back down. And at each step of our training or fine-tuning, we're going to quantize, de-quantize, move on. So we're never holding that half precision fully in our GPU memory; rather, we're simply using half precision to do the calculations. This is the magic of what's really going on behind the scenes, and it turns out this works incredibly well. And again, the intuition behind the 16-bit piece is that we saw that for inference, you can go from 32-bit down to 16-bit and get very, very good results. We saw this experimentally over a lot of time, not just in papers from the University of Washington, but also in papers from many other researchers. And this QLoRA approach, fundamentally, is to load those full-precision weights into GPU memory as quantized 4-bit weights, and then only de-quantize up to 16-bit during calculation, back down as it moves through. All right. So this is the core approach that we're going to see today. You're going to see things like this: this is the bits and bytes configuration. And you'll notice that when we load in, we want to load in 4-bit. You're also going to see a data type called NF4. Chris is going to talk a little bit more about it. It's very important; it's essential to the QLoRA approach. And that's it for the big ideas we need to really see how this build can be taken to the next level. So what we want to do is take the same build that we've already looked at, the old UNO reverse card build: given the response, predict the instruction. We want to use the same model that we saw last week, because it's still one of the best out there: Mistral 7B Instruct v0.2. And we're going to use the same data for fine-tuning, just to keep everything simple. That Alpaca GPT-4 data set is there. So again: given the output response, predict the input instruction. And with that, we're ready to kick it back over to Chris, the wizard, to show us how to do fine-tuning with PEFT QLoRA and fill in some additional details. Wiz, over to you, man.
Oh yeah, thanks, Greg. Really appreciate it. And guys, I'm excited, because quantization is definitely one of my favorite topics. It is kind of one of the best things we can do right now. And as you can see, we only used around 20 gigabytes of GPU RAM to train this 7 billion parameter model, which is quite impressive in my lens. That includes fine-tuning. In any case, we'll get right into it. First of all, we're going to be using Mistral 7B Instruct v0.2. This is just Mistral's most recent instruct-tuned model. I love it. And we're going to now move on from PEFT, which we discussed last week, into the Q in QLoRA. So we discussed the idea of how we can reduce the number of parameters that we train. But now, how do we reduce the size of the parameters that we train? First of all, what is quantization? Greg already talked us through it. I'm going to give a brief overview here of what's happening under the hood, and then we'll get into how to implement it in code. Spoiler alert: it's super easy. Thanks, bitsandbytes. But let's look at what quantization is from this perspective. So quantization is a process of discretizing an input, going from a representation that holds more information to a representation with less information. That's crazy, right? The idea is that we want to express more information with less information. So how do we actually do that? Well, in the Tim Dettmers QLoRA paper, they rely on this process called blockwise k-bit quantization, which sounds very scary, but it's not so bad. It relies on two very important things. One, it relies on the fact that in neural networks, the model weights are mostly normally distributed. So if you're coming from a stats background, as soon as you hear the phrase normal distribution, your eyes should light up: we're going to be able to make use of a lot of very clever tricks to help us do whatever we're trying to do. And two, it relies on this idea of the NF4 format, which is a number format, or data type, created by Tim Dettmers and team, which is information-theoretically optimal. Now, not literally; it was proven that this is not literally true. But empirically, for all intents and purposes, NF4 is very, very efficient, which is excellent. So how does this work behind the scenes? Okay, we get it: model weights are normally distributed. That's great. So what we're going to do is essentially put a pin in the number line that is near to the mean of our desired numbers, which are going to be in a distribution, and that distribution is going to be normal, right? And then we're going to use that mean as a zero point, and we're going to use this NF4 data type, which is a zero-centered number format, to represent the numbers that appear around that specific point on the number line. So there's a step that needs to take place here: we're going to normalize all of our numbers to be within a specific range of minus one to one. And then we're going to have this idea of a saved place on our number line that we understand a range around. And that's really about it. Now, it's a bit simplified, and you can definitely look at the paper for the math. It's great.
But the idea is that we kind of drop a pin on the number line, and we have this NF4 number format, which represents a range around that point on the number line. And that is what's going to build up the buckets, or bins, that we're going to use to represent our numbers. And the reason this works so well is, again, because of the fact that model weights are normally distributed, and because this is an information-theoretically optimal data type for that minus one to one range. So this is specific: NF4 is for that minus one to one range, for a normally distributed, well, distribution. So that means the only reason this works is because of that first fact, right? Now, beyond just that, QLoRA does an extra step. So you might have thought to yourself, when I said drop a pin on the number line: well, okay, if we drop a pin on the number line, that's all well and good, but doesn't that mean we have kind of a high-precision number, right? It doesn't have to be as high precision, perhaps, but it's definitely still high precision. And that's true, right? That pin we drop is high precision. Well, it can be used to represent many numbers, in this case 64 numbers, from the QLoRA paper. So each pin is associated with 64 numbers. Tim Dettmers and crew said that's not good enough: that's going to give us 0.5 bits per parameter of overhead, right? So we need to go further. So what they did is they actually took all of those quantization constants (that's the technical term for that pin that we're dropping) and quantized those as well. So we represent our quantization constants in an 8-bit format, and we do 256 of those for every 32-bit precision number. So we have one 32-bit precision quantization constant that sits on top of 256 8-bit quantization constants, and each of those sits on top of 64 4-bit weights. So you can see the savings in terms of memory here is insane, right? We're able to represent so much of our data in that 4-bit representation, and we're also able to do it in a way that retains a ton of information. And that is key. I saw some questions in the YouTube chat concerning: what are the trade-offs here? What are the performance gains? There definitely are some when it comes to latency; we'll discuss those as we move through the rest of the notebook. But in terms of the actual effectiveness of the model, the performance hit can be very small. It is not zero, there is a performance hit, but it's incredibly small, which makes this a very effective technique, especially when applied in the way we're going to see it applied today. So that's basically what we're talking about when we talk about this idea of QLoRA, right?
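Here is a tiny, self-contained sketch of the block-wise idea being described: normalize each block of 64 weights into the minus one to one range using its own constant, then snap each value to the nearest entry of a small 16-level codebook. The evenly spaced codebook below is a stand-in, not the real NF4 levels, and this is only a conceptual illustration of the approach, not the actual bitsandbytes kernel.

```python
import torch

torch.manual_seed(0)
weights = torch.randn(256)                    # model weights are roughly normally distributed
blocks = weights.view(-1, 64)                 # one quantization constant per block of 64 weights

absmax = blocks.abs().max(dim=1, keepdim=True).values   # the "pin" (constant) for each block
normalized = blocks / absmax                             # now every block lives in [-1, 1]

codebook = torch.linspace(-1, 1, 16)          # 16 levels = 4 bits (NOT the real NF4 spacing)
codes = (normalized.unsqueeze(-1) - codebook).abs().argmin(dim=-1)   # the 4-bit codes we store

# De-quantize for computation: look up the codebook value, then rescale by the block constant.
dequantized = codebook[codes] * absmax
print("max abs reconstruction error:", (blocks - dequantized).abs().max().item())
```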
We're talking about dropping a pin on the number line, representing numbers around that, and then doing that one more step abstracted, which is harder to visualize, but there it is. Okay, so how do we do it in code? Well, first of all, we've got to load our familiar usual suspects here: bitsandbytes, datasets, accelerate, the loralib library, transformers, and peft. These are all staple libraries we're going to be using when we're using these QLoRA tools. And then we're going to grab our model, and the model we're going to grab is the Mistral AI Mistral 7B Instruct v0.2. It's the most recent instruct model for Mistral; it's a great one. And then this is where the magic happens: this is the bits and bytes config, from the bitsandbytes library. We're going to see that we load in 4-bit. So this means that when we actually move our model from those saved weights that exist on our drive into our GPU, we're going to load them in that 4-bit quantized state, right? So that's that collection of numbers, then their quantization constants, and then the constants on top of those constants, because we're using this use-double-quant option, right? If we omitted that double quant, we would only do one step, and then we would be saving less effective memory. We're also going to be using the quant type of that NF4 I talked about. That's the number type created by Tim Dettmers and crew, which is information-theoretically optimal. Again, not literally true, but it's close enough, so we'll keep saying it. And then we're going to have this idea of a compute dtype, which is going to be torch bfloat16. Now this is very important, right? When we store numbers in 4-bit, that's awesome, but when we try to compute with them, it's really bad. It's actually quite bad, right? If you think about when you multiply two numbers together, especially if they're kind of small, we usually wind up with a number that needs relatively more precision to fully and accurately represent it, right? When we divide 100 by 1,000, we wind up with a very small number, and the idea is that we'll need more precision to represent that very small number. So what we do with the QLoRA approach is we actually de-quantize whatever we need to compute with our weights. Now, this is done at a per-tensor level, so we never have the full model de-quantized in memory, just one tensor at a time, right? So this saves us a ton of space, and it also gives us the ability to compute as if we had this model in that higher precision, or bfloat16, format, which is huge. So we're saving tons of space, and then we're de-quantizing, so we also retain some of that compute precision. And that is what lets this method really shine: the fact that we de-quantize for computation and then store in 4-bit. I think without that, this would be a less powerful method, but with that, it's amazing. You can choose up to full precision here. Obviously, that is going to come with some small memory overhead; you do have to upcast a tensor to the full precision, but it's negligible compared to the size of the model.
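Putting those pieces together, here is a sketch of the configuration and model loading being described, using the standard transformers BitsAndBytesConfig arguments; the exact settings in the notebook may differ slightly, but the flags shown here map directly onto what was just discussed, and the tokenizer padding line is a simple assumed setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store the weights on the GPU in 4-bit
    bnb_4bit_use_double_quant=True,          # also quantize the quantization constants
    bnb_4bit_quant_type="nf4",               # the NF4 data type from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,   # de-quantize to bf16 for each computation
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token    # assumed padding setup for training

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",    # push as much as possible onto the GPU
    use_cache=False,      # cache off while we fine-tune
)
```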
And this approach does also, and this is critical, come with some inference and training latency overhead, right? The fact that we have to de-quantize and re-quantize, de-quantize and re-quantize, means that we're performing an additional operation per computation, and so that is going to impact inference. Now, Tim and team have written some great kernels for this, so it's not very slow, but it is going to be slower than if we weren't doing that extra operation. And so this is one of the key trade-offs, right? We had questions about trade-offs. One of the key trade-offs with QLoRA and with the bits-and-bytes approach is that it is extraordinarily flexible, it is very powerful, and it works very well with PEFT adapter methods, like LoRA and others, but it does cost us a little bit of inference latency and training time. So that's important to keep in mind. Once we have our bits and bytes config loaded, all we have to do now is just load our model like we normally would. So AutoModelForCausalLM.from_pretrained: we're going to pass in our Mistral AI model, we're going to pass in our quantization config, we're not going to need the cache, and we're going to map this to auto, which is going to shove as much as it can into our GPU. In this case, again, because the actual model loaded only takes up about 15 gigabytes of GPU memory, it's all squeezed into the GPU there. So that's great. We do some pre-processing on our tokenizer to make sure that it's set up in the right format for training, and then we can look at our model architecture. You'll notice that we have this 4-bit layer, right? This 4-bit layer is where that bits and bytes comes in. You'll see that we have the 4-bit layer on our Q, K, V, and O projections, as well as our MLP. So it's 4-bit all the way down. This is the idea, right? We don't want to just quantize some of the model; we're going to quantize as much of it as we can. However, you will notice that we omit some of the layers; specifically, we omit our layer norms. And the reason we omit our layer norms is that we know they are going to tend to a very, very small number, near zero, and we're going to run into some training instability issues if we use lower precision to represent these layers. So we're actually going to keep those in full precision. Now, they're very small compared to their weight matrix counterparts, but we do want to make sure that we're keeping those layer norms in a higher precision. This is to avoid training instability issues, right? If we have these numbers diverge and cause a ruckus, we're not going to be able to train very well. And so that's why we don't see those 4-bit layers there. Now that we have our model loaded, and we can see that it's in 4-bit, we're very happy about that, and it's time to PEFT-ify it. We talked about PEFT last week, so we're not going to spend too much time on it today, but the idea is fairly straightforward. We are going to use our LoRA config to set up our rank. Our rank is going to be 64 in this case. We're going to set our alpha, which, by conventional wisdom, should be about twice your rank, though it's always worth doing hyperparameter searches here to make sure you have the most optimal hyperparameters. Your LoRA dropout: a pretty consistent value. Your bias is none.
Now that we have our model loaded and we can see that it's in 4-bit, we're very happy about that, and it's time to PEFT-ify it. We talked about PEFT last week, so we're not going to spend too much time on it today, but the idea is fairly straightforward. We're going to use our LoRA config to set up our rank, which is going to be 64 in this case. We're going to set our alpha, which by conventional wisdom should be about twice your rank, though it's always worth doing hyperparameter searches here to make sure you have the most optimal hyperparameters. Your LoRA dropout is a pretty consistent value, and your bias is none. Task type is causal language modeling, because that's what we're doing. You'll also notice that we have our q, k, and v projection target modules. With QLoRA, we want to target as many modules as we can; the QLoRA paper's wisdom is that we should actually target all possible layers with LoRA. In this case, we're just going to leave it up to PEFT to simplify things a bit for us. For our base model, all we have to do is prepare the model for k-bit training. This makes sure that we can train, that all of the trainable layers are set appropriately, and that any frozen layers are also set appropriately. Then we get our PEFT model, and the PEFT model gives us those LoRA layers. Now, you'll notice that we have only about 2.7 million trainable parameters out of a possible several billion, and the key thing about the "Q" in QLoRA is that when we make each of these parameters roughly one-eighth the size, we're effectively reducing the footprint by another factor of about eight. It's not strictly eight, because it doesn't interact with all layers, but it's roughly another factor-of-eight reduction in the total size of the parameters we have to train, which is insane. We were already at a fraction of a percent, and then we even further reduce the amount of actual work we have to do, which is great. And we can see here that our LoRA layers are also 4-bit, just like the regular layers that were converted to 4-bit.
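Putting the PEFT pieces just described into code, continuing from the loading sketch above. This is a minimal sketch: the alpha of 128 simply follows the "about twice the rank" rule of thumb, and the dropout value and target-module list are assumptions that may differ from the original notebook.

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

lora_config = LoraConfig(
    r=64,                     # LoRA rank
    lora_alpha=128,           # conventional wisdom: roughly twice the rank
    lora_dropout=0.05,        # a fairly standard value (assumed)
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed module names
)

# Prepare the quantized base model for training, then wrap it with LoRA adapters.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a few million trainable parameters out of billions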
After that, we're going to load some data. We're just going to grab the Alpaca GPT-4 data and do this Uno-reverse-card train, a fun one that's kind of a classic now; I think it's what you're going to see whenever you do an instruction tune, and it really proves the point that the process works. We're going to ask the model to take an input and then generate an instruction, so we're creating a model that's good at generating instructions. We use this generate_prompt helper function to create the prompts our model will be trained on, and then we set up our trainer. Our trainer is mostly boilerplate. The other big insight from the QLoRA paper is the paged AdamW 32-bit optimizer. I'm not going to go too deep into it here, but the idea of using paged memory is really effective: it helps us train very stably and very efficiently, at very little cost to us other than setting the flag. The rest of this is all boilerplate, good boilerplate, but boilerplate. And we make sure that we have bf16=True, which ensures our compute dtype is compatible when we upcast, which is necessary. A question from the chat: "It says CUDA, but would a Mac suffice to fine-tune the model in 4-bit?" I would recommend an NVIDIA GPU for sure; the kernels are written for it. I believe you can use 4-bit on other devices, but it's not necessarily going to be as efficient or as fast. The optimization of the kernel really added some speed to this process, but I'll get back to you about that after a little bit of digging, to make sure you can do this on a Mac, even if it is going to be slightly less efficient. We're going to use the SFTTrainer from TRL to train this, with a max sequence length of 2048, just for Mistral itself, and then we can train it with trainer.train.
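A hedged sketch of the training setup being walked through. The optimizer, bf16 flag, and max sequence length come from the discussion above, while the output directory, batch size, and epoch count are illustrative placeholders; it continues from the earlier sketches (model, tokenizer, lora_config), and TRL's SFTTrainer arguments have shifted across versions, so names may differ in your install.

from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="mistral-7b-instruction-generator",  # placeholder output path
    num_train_epochs=1,                             # illustrative; tune as needed
    per_device_train_batch_size=4,                  # a small batch size keeps GPU RAM low
    optim="paged_adamw_32bit",                      # the paged AdamW optimizer from the QLoRA paper
    bf16=True,                                      # match the bfloat16 compute dtype
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,   # assumes prompts built with the generate_prompt helper sit in a "text" column
    dataset_text_field="text",
    peft_config=lora_config,
    max_seq_length=2048,           # the max sequence length used for Mistral here
    tokenizer=tokenizer,
)

trainer.train()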
At the end of the day, we reload our model, just a quirk of PEFT. We make sure we load it in 4-bit, and we set our torch dtype to float16; that's the compute dtype again. And then we look at the model. So we see the instruction: identify the odd one out among Twitter, Instagram, and Telegram. That's great; that's an instruction that would result in this "the odd one out is Telegram" response, and you can see the ground truth is "identify the odd one out." If we look at the base model, we can see that the base model's instruction is much less good. It doesn't even mention Telegram, so it's not a very good instruction. But that is it for me and the code demo, so with that, I will pass you back to Greg, who will wrap us up. Yeah, thanks, Chris. That was awesome as usual, and I love that deep-dive explanation of exactly what's happening within the quantization method from the QLoRA paper. So today, building off the PEFT-LoRA approach, we saw that PEFT-QLoRA fine-tuning is really about modifying behavior by updating fewer quantized parameters using factorized matrices: the idea of using fewer parameters and using the LoRA factorized-matrix approach. That gets us from 3.8 billion down to 2.7 million parameters, less than 1%. And then we come in with quantization, technically blockwise k-bit quantization, effectively just allowing us to express more information with less. The key to the QLoRA method is that those parameters are quantized down to 4-bit before we begin training; during training we de-quantize when we have to do computations, and then re-quantize to continue the training process. Next week, we're covering not fine-tuning and loading, but serving and inference with vLLM, so we hope you can join us for that one. But for today, let's get started with the Q&A period. I'd love to invite Chris back up to the stage, and if you have questions, throw them in the Slido; it looks like Manny is crushing it in the Slido right now, so shout out to Manny as usual. We'll also try to get to your questions if you throw them in the YouTube live chat. But Chris, let's jump right into it. First question: is the reason we don't get an inference latency benefit with QLoRA because model weights are retained as 32-bit during inference? To be more specific about the phrasing, I think we could say that the model weights are de-quantized to a higher precision during inference. So yes, that is why we don't see a benefit to inference. In fact, we see a penalty. It's not a big penalty, but there is a penalty. So yes, that's exactly why. Oh, okay, nice. Excellent question, an astute one there. And then the first one from Manny here: when we're talking about parameters, are we referring to additional features, such as the Xs in the equation y = predict(x1, x2, ..., xn)? Are x1 through xn considered parameters? What are we talking about when we say parameters? Yeah, parameters, features, weights: it's all numbers, and we have so many different names for similar kinds of objects. I would think of parameters more specifically as the entities that fill up the weight matrices we use to compute when we're actually doing that matrix multiplication. But essentially, a parameter is any node in the model architecture, right? So this is not something you're going to use with your XGBoost or your traditional ML methods; it's not a random-forest-applicable technique. It's specific to that deep neural architecture, and right now it's mostly explored in the transformer architecture, though there's no reason it needs to be. Hopefully that answers the question, Manny. Yeah, we'll flow through some of these other questions and pop back to Manny's questions as well. I think this one's super relevant to everybody: if I don't have a powerful laptop, where can I practice these techniques? Honey P, it's Colab. Get yourself into Colab; Colab makes it so easy. The whole benefit of this kind of thing is that we can load these very large models with very few resources. Oftentimes you can load a 3-billion or 6-billion-parameter model in a free instance of Colab, using the free-tier GPU, the T4. So I think that's a great way to start if you don't have a powerful laptop. As you get more embroiled in the space, you might look at other cloud hosting solutions, Lambda or AWS or whatever you want, but for the getting-started beginner, I would say Colab is your best friend. If you want to, you can pay for compute to get slightly beefier GPUs, but stick to the free tier and stick with your 3-to-6-billion-parameter models, and you're going to have a great time. Yeah, stick to the 3 to 6 billion, quantize, quantize, quantize, and then Colab. We teach entire courses in Colab and we do a ton of fine-tuning throughout, so just try to be as efficient as possible. Don't sit there and run tuning for days at a time if that's not really something you're interested in; use small models, try to make the model as small as possible by picking the small size on Hugging Face and then quantizing. But there should be nothing stopping you if you're a beginner. You don't have to get AWS; you don't have to get all these things. Okay, Islam, we've got a question that's getting upvoted here: can I do this fine-tuning with llama.cpp? And is this fine-tuning possible to plug into end-to-end fine-tuning within a RAG framework? So, end-to-end fine-tuning within a RAG framework: yes, 100%. Arcee AI, who we've done an event with, have their DALM framework on GitHub (we'll get a link for you guys to drop into the chat), and that is 100% a great tool that does leverage, or can leverage, LoRA as well as quantized methods.
In terms of llama.cpp, I'd have to double-check. I don't know off the top of my head, but I will double-check, and we can include that information in a comment if I'm unable to find it before the end of our time together today. Okay, all right. Back to Manny's next question: we say "weights and biases" when we talk about ML models or neural network models. So if weights are parameters, are the biases also parameters in the LLM world? Let me think through this question. No, but yes. At the end of the day, the thing we care about is the weights; that's the answer to this question. We want to update the weights, a.k.a. the parameters. Okay, good stuff. Then the last Manny question here: can you speak about examples of LoRA adapters? What are they, and what are they created for? From a tool perspective: let's say we create a LoRA adapter that's very good at translating natural language to SQL. Then we create another LoRA adapter, and that one has been fine-tuned to translate natural language to Python. Then we create another adapter, and you see, you can keep going. The idea is that whenever we do inference, we can choose whichever of those adapters, those LoRA layers, to flow information through, and that's going to make our output consistent with what we fine-tuned it to do. You can think of them as little hats you can put on your model that change its behavior, but they don't have to modify the base model at all; it's just a hat that sits on top of it and gets it to do a different job. And we can choose those hats as we want, even at inference time; we can choose which hat we want it to wear. Yeah. And this is the thing for businesses too: these adapters are plug and play. So if you want the LLM to do something super specific, and prompt engineering has only gotten you so far and you just can't get exactly what you need in and out, if you want to really constrain what your user can put in and really constrain what comes out, then this fine-tuning piece, this LoRA adapter piece, is going to be your friend. We had a great meme that we posted on LinkedIn recently: if you're doing fine-tuning, you're kind of doing LoRA. So this is a big question; examples of LoRA adapters would be anything that you've fine-tuned.
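To make the "hats" analogy concrete, here is a minimal sketch of how PEFT lets you load several LoRA adapters onto one base model and switch between them at inference time. The adapter repo names are hypothetical placeholders, not real checkpoints.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach a first adapter (hypothetical repo), then load a second one by name.
model = PeftModel.from_pretrained(base, "your-org/nl-to-sql-lora", adapter_name="sql")
model.load_adapter("your-org/nl-to-python-lora", adapter_name="python")

# Pick which "hat" the model wears for a given request; the base weights never change.
model.set_adapter("sql")
# ... generate SQL here ...
model.set_adapter("python")
# ... generate Python here ...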
Okay, we've got a couple of minutes left. I'd like to shout out: thanks for the great notes in the chat, we appreciate your support, appreciate it a lot. It looks like George is struggling with a specific error; maybe we can comment on that after the event, as he's put his error into the Slido as well. I guess the last question, and this is a big one, so you can take maybe two minutes, Chris: what are the trade-offs of using dimensionality-reduction techniques like LoRA, QLoRA, and PEFT on LLMs, in terms of training, inference, and fine-tuning? When you think of trade-offs, maybe best practices here, what do you think of? I mean, the big one is quality, or how good the output is. There is a trade-off there, but it's really small, and beyond being really small, it's really small. Okay, this is the way I think about trade-offs when it comes to LoRA and the crew: I can fine-tune a LoRA model to be, let's say, 98% as effective as full fine-tuning, but I can do that in a tenth of the time with a thousandth of the resources; divide the resources by a thousand. That is a trade-off, you're losing 2%, but it doesn't feel like a real trade-off, especially in terms of business value. It's not a real trade-off these days, especially if you use a high enough r, or rank, in your LoRA. With that 128 rank, you're still getting a massive reduction in compute, but you're retaining so much of the performance that it truly doesn't feel like a trade-off. There is a trade-off, to be clear; there is always technically going to be a trade-off, but it lets you do things you wouldn't otherwise be able to do, so it doesn't feel like one. For small companies, you can fine-tune a model that does a whole new, novel thing that fuels your business, that is your business, something you just couldn't do if you didn't use these methods. In that case there is no trade-off; it's enabling you to do something that was previously impossible for you, which is only an advantage. When it comes to inference specifically, both QLoRA, or any quantized method using bitsandbytes, and LoRA, if you're talking about non-merged LoRA adapters, do impart a small inference latency penalty. At scale it can maybe be felt: if you're really getting to those hundreds of thousands of requests per second, then compared to a very efficient model you might want to re-quantize to another format and serve that model directly, instead of having it be part of your LoRA stack. But again, these are problems that come with scale, and that scale also helps you fund the solution. Outside of that, you're not going to feel these issues until you're into six figures or more of requests per second for your LLM stack. So I would say there are trade-offs, but when you're getting started, they really don't appear as trade-offs. All right. Okay, so: use PEFT-QLoRA unless you've got a million requests per second. Sounds like a plan, dude. All right, cool, let's go ahead and wrap it up. Thanks, Chris, and I can't wait till next time. Thanks, everybody, for joining us today. Again, next week we'll be back talking inference and serving, and how to do it efficiently with vLLM, one of the hottest open-source tools out there for doing that; we'll tell you a little bit about the tool and its background. If you liked this session, you might also really like cohort four of LLM Ops: LLMs in Production, launching February 13th. In that course, which we'll soon be announcing an expanded curriculum for, you'll learn to prototype and scale production LLM systems, including using RAG techniques, fine-tuning, and so much more. Check it out at the link. And lastly, please share any feedback you have on today: you can drop it in the chat or in the feedback form that will drop to you now.
And that's it for today. Until next time, keep building, shipping, and sharing, and you know we'll be doing the same thing. See y'all next week.
Fine-tuning with QLoRA (Quantized Low-Rank Adaptation)
3710
AI Makerspace
20240111
​GPT-4 Summary: Discover how to supercharge your LLM application development by mastering quantization, a game-changing technique that dramatically reduces the size and computational demands of large language models (LLMs). In our upcoming live event, we'll dive deep into the essentials of quantization, demonstrating how it makes LLMs more accessible and cost-effective for tasks like loading, fine-tuning, and inference on limited hardware. Learn the ins and outs of using the bitsandbytes library to load quantized LLM parameters for our Mistral-7B demos, and explore advanced fine-tuning techniques with QLoRA, building on the principles of Parameter Efficient Fine-Tuning and Low-Rank Adaptation (PEFT-LoRA). Whether you're working on development or in production, this event is your key to speeding up the LLM application cycle. Code will be provided, ensuring you have everything you need to implement these strategies effectively. Don't miss out on this opportunity to elevate your LLM projects! Event page: https://lu.ma/quantization Have a question for a speaker? Drop them here: https://app.sli.do/event/7CrWMfvZg2NXWh6aYsKkfr Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our next AI Engineering Bootcamp on Maven today! https://maven.com/aimakerspace/ai-eng-bootcamp How'd we do? Share your feedback and suggestions for future events. https://forms.gle/u63yUJRD9AijuTE98
2024-06-10T02:44:23.704976
https://www.youtube.com/watch?v=w67fQ_-8hq0
Hey, thank you for joining. This will be just a quick tour through a couple of Hugging Face Spaces applications I created to help with vision language model research. Each week there are several open models coming out that are a fusion of a vision encoder and a language model decoder, with some slight variances in implementation, and it was just a lot of work to pull this information together. I was trying to summarize it in a paper, but these apps were easier for me. I still might create a paper, but for now this is meeting its purpose for me, and hopefully it's helpful for you as well. So this initial display here shows some of the key vision language models that are in use. Some of them have been around a bit longer, and this article, as you can tell, is a couple of months old, which is like 10 years in human years, so some things are missing. MoonDream is here, which is cool; it's a very small vision language model. But I do not see PaliGemma from Google, and I don't see Microsoft's Phi-3 Vision model. So that gets back to the rapidity with which things are changing in the vision language model space. This blog post from Hugging Face is excellent, though, in terms of describing the high-level architecture, and it comes down to image encoders, the multimodal projector interface in between, and the text decoder that's implemented by the language model. That's an important concept as I hop into the app. So in Hugging Face Spaces, there are two apps: one is a Model Explorer and the other is the Hugging Face Extractor. The Model Explorer leverages a couple of Hugging Face capabilities. One is Gradio, which is the user interface you see here, and the other is the Transformers library, which gives you a standard way of interacting and interfacing with a model. That's what I use here to quickly introspect and see the high-level details of a vision language model. Of the options I tried, this view was the simplest, and it was also very easy to go from this view, the output from the Transformers library, into something more usable and professional. But anyway, this was a very quick head start. So this model we're looking at is Phi-3 Vision, and if you think back to what we saw before in terms of the standard layers for a vision language model, you have the vision encoder, then you have the projection layer, or the interface, and then you have an LLM that it talks to. In terms of Microsoft's model, it uses a CLIP architecture, which is slightly different from the next one we'll look at, PaliGemma, and let's just leave it at that for now. Okay, and so this is a different model. Here you go, let's just run it so you can see it at work. I'm using the examples to drive this, and very quickly it's able to introspect the model based on the model definition. Here, this is Google's PaliGemma model: it uses SigLIP for the vision encoder, it uses a single linear layer at the projection or interface layer, and then at the language model layer it uses Gemma. So often, when I'm trying to compare, I'll put two models side by side on the screen and compare interactively.
Just to show the code a little bit: this is where some of the value came for me in terms of rapidly iterating and understanding what's required to interact with the various vision models. If you're thinking of being plug-and-play and moving Lego pieces around, this is a very quick way to understand how you can best interact with a model. One of the key discoveries for me, going through and digging in, is that the secret sauce of what helps you interact with a model is in the config.json, based on the standard Transformers architecture. I don't want to belabor the point too much, but those of you who have worked with Hugging Face models will understand the value of this. Most models, particularly non-vision models, you'll be able to load with a standard class, but there were a few I found that used a specialized class. In those cases, and this is what's pulled from the config.json, if the architecture is of a specific type, like LLaVA versus PaliGemma versus Idefics (a vision model from Hugging Face), and the standard class has an issue with it, I was able to quickly switch it out and use a more specialized class to load the model and view the architecture. The other quick lesson learned from dealing with the Phi vision model, and also the Phi language models, was that by default they use flash attention, so just by putting this logic here, the app is able to leverage flash attention appropriately when required. So that's a quick walkthrough of the Model Explorer. I think as you go through it, you'll understand what I was trying to do and easily extend it to your specific use case. Things change so quickly that one of the essential things that helped me is that you always get errors, and this is kind of an error panel. This is one of my favorite ones: I wanted to get it working, but it was throwing an error, and it has to do with the specialized class it declares not being found in the transformers library. For me, that's my next thing to troubleshoot. In terms of approach, errors are great; you learn lots from them, and in this case I was able to catch the error and show it here. The other thing about Hugging Face Spaces is that it provides an easy mechanism to look at the logs. If it's not your space, you may not be able to see them, but if you clone it locally under your own space, you should be able to get this working quite quickly, as long as you have a Hugging Face Pro account.
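As an illustration of the pattern being described, here is a hedged sketch, not the app's actual code: read the architecture name out of config.json via AutoConfig, branch to a specialized class when needed, and request flash attention for Phi models. The example repo id at the bottom is only illustrative.

import torch
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    LlavaForConditionalGeneration,
    PaliGemmaForConditionalGeneration,
)

def load_for_inspection(model_id: str):
    config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
    arch = (config.architectures or [""])[0]  # e.g. "PaliGemmaForConditionalGeneration"

    # Phi models default to flash attention; request it explicitly when available.
    extra = {}
    if "phi" in model_id.lower():
        extra["attn_implementation"] = "flash_attention_2"  # assumes flash-attn is installed

    if arch == "PaliGemmaForConditionalGeneration":
        cls = PaliGemmaForConditionalGeneration
    elif arch == "LlavaForConditionalGeneration":
        cls = LlavaForConditionalGeneration
    else:
        cls = AutoModelForCausalLM  # fall back to the standard class

    return cls.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, **extra
    )

# print(load_for_inspection("google/paligemma-3b-pt-224"))  # example repo id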
So next, let's hop into the extractor. For me, as I said, some of the key learning from the Model Explorer was understanding how I can best determine what class I need to use to leverage a specific vision model, or even a large language model, and the extractor gave me a mechanism to easily search for that; right now it's focusing on Spaces. All right, let's take a look. Okay, so this is taking a look at a model, and now that I know that config.json holds a lot of the secret sauce, you can search it. In terms of the config.json, it calls out right away that the specialized architecture, or class, to use with PaliGemma is PaliGemmaForConditionalGeneration. And anyway, that's the value of this. Also, one of the things I use LLMs frequently for is to help do analysis. So even if I'm using the ChatGPT UI, I can copy this and paste it into ChatGPT and say, hey, introspect this architecture, how can I use it to do X, Y, Z? Or if I copy the extracts from two models, I can say, hey, compare these. There are certain shortcuts I took in terms of focusing on files of a certain size, because I was primarily interested in small files like config files, so that I could use them for this type of analysis easily. If you want to customize it for your purpose, you can clone the repo and adjust as appropriate. The other thing that was really powerful, particularly with Gradio, is that it does some cool things in terms of the ability to create a very quick UI for multimodal use cases. As you can see, some of the icons may be a little small: you can use a webcam, and I guess audio is enabled for this one, but it's very easy to do. And I've found that finding working prototypes or examples that you like, and then doing analysis on the files, is much quicker and easier than reading and scrolling through a bunch of documentation. So this is an example of a great demo that was created by Hugging Face and the Google PaliGemma team. It's got some really powerful stuff in terms of vision analysis, but certain aspects of it would be a little more challenging to code from scratch. So I'll clear this and switch to something else: I'll ask it to detect a bee. What it's going to do is take this image, identify the bee, and use a bounding box to indicate its location. It's still thinking, but yeah, here, what it's done is actually use the PaliGemma model plus another model to overlay this bounding line. And in terms of being able to segment cats, it's similar; they take a different method of highlighting and segmenting the cats. But getting quickly to the point of understanding how that's done, and the code and the models to do it, that's the key scenario. So what I did here, and this is the basis for some of the examples, is I took this space and pasted it into the app. It runs so quickly that it's not even using a GPU to do this, but it goes out and it pulls back the content.
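A minimal sketch of the kind of extraction being described, assuming huggingface_hub as the client. The Space id is illustrative and may not exist, and the extension filter stands in for the app's small-file heuristic; the real extractor may work differently.

from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
repo_id = "big-vision/paligemma-hf"   # illustrative Space id
repo_type = "space"

# List files in the repo and keep config-style text files for analysis.
files = api.list_repo_files(repo_id=repo_id, repo_type=repo_type)
text_files = [f for f in files if f.endswith((".json", ".py", ".md"))]

for filename in text_files:
    path = hf_hub_download(repo_id=repo_id, filename=filename, repo_type=repo_type)
    print(f"--- {filename} ---")
    print(open(path, encoding="utf-8").read()[:2000])  # extracted content, e.g. to paste into an LLM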
And what I have here, this is the app.py, so this is the main logic, the Gradio app that's used here. What you're able to do is take these examples, these prototypes that others have created, understand the approach they've used, and then get to understanding some of the underlying secondary models used. In this case, for being able to do the bounding boxes and the overlay that's used for segmentation, it's all here. I'm not going to go into the details; I just wanted to get into the use case. There is such brilliant and insightful work that went into building this meaningful demo, and you can take these lessons learned and leverage them to build your own excellent use cases. So a huge shout-out to Hugging Face and the Google PaliGemma team for sharing this. The other quick thing I'll show here is that on the Model Explorer page, I've documented some of the things that were so useful for me on my learning journey. One is the link to the Hugging Face overview of VLMs; that's what I showed right here. The other is a blog post on PaliGemma that goes into how they went about their approach and the various use cases it's meant to help address. My last shout-out is to the Google Model Explorer. This is something I ran across at the time I was trying to do this analysis. I couldn't get it to work for the vision language models, just based on how they do the model fusion of the vision encoder and the language model decoder, but it did work really well for large language models directly. So it's definitely worth following and paying attention to the evolution of this tool; I think they will do it very well. It's a completely different use case from what I had in mind when I created my tools, the Model Explorer and the Hugging Face Extractor, for discovery and analysis. With that said, I'd like to thank you for your time, and enjoy the rest of your day.
Unlocking the Mystery of Open Source VLMs: Accelerate Your Prototyping with Model Explorer
996
Don Branson
20240606
In this exciting video, I dive deep into the world of Vision Language Models (VLMs) and unveil two innovative applications designed to supercharge your initial analysis and prototyping process. 🚀 🔍 Application Highlights: Model Explorer: Watch as I showcase the Model Explorer, built from scratch using the powerful Transformers library. This tool allows you to quickly introspect and reverse engineer VLM architectures, offering insights into the implementation choices at each layer—from image encoders to interface/projection and text decoders. 🌐 HF Extractor: Discover the magic of the HF Extractor, a simple but effective tool for extracting the "secret sauce" from top-tier Hugging Face spaces. With this tool, you can enhance your own spaces by leveraging the innovative techniques and configurations used by eminent creators. 🧪 🔧 Why Watch? - Gain an in-depth understanding of key VLM architectures. - Learn how to accelerate your VLM prototyping and analysis. - Get inspired by the innovative approaches and insights shared. - Don't forget to give a shoutout to the amazing creators who inspire you! 🎉 - Join me on this journey of discovery and innovation. Whether you're a seasoned ML engineer or just starting out, these building blocks will revolutionize the way you approach VLMs. 👉 Watch now and take your Vision Language Model projects to the next level! Chapters 00:00 Introduction 01:40 High-level Architecture 02:17 Model Explorer Walkthrough 05:00 Model Explorer - quick deep dive into the code 08:37 HF Extractor - Walkthrough 10:49 HF Extractor - using it to understand the building blocks of the PaliGemma demo space 15:27 Google Model Explorer
2024-06-10T18:33:50.401210
https://www.youtube.com/watch?v=anIBtQNn1G0&ab_channel=AIMakerspace
Hey, Prompt, what would you say is the open-source edge of large language modeling today? Well, Dr. Greg, I would probably say it's got to be Llama 3. Hmm. And Wiz, what would you say is the open-source edge of language? I'm probably going to say Gen Z slang, that kind of conversation. The young kids got us covered, right? And that new, new LLM. Well, what do you guys say we combine these two edges of language and language modeling today? I mean, we've got to link up, vibe well, and make a dope team today on Llama 3 and Hugging Face. Right, boys? Sound like a plan? For real. Come in, Greg. Prompt Engineering, Wiz: today we build, ship, and share together. We'll see you boys back in just a minute. My name is Dr. Greg. That was the Wiz. And our new friend, yet to be revealed, Prompt Engineering, is joining us today. You may know him from YouTube or his other work on open-source projects like localGPT. Today we talk about how to do end-to-end prototyping with Llama 3, based on how to actually understand Gen Z language. We'll walk you through everything you need, from data to fine-tuning to deploying a scalable endpoint to putting a UI on it and serving it up so that others can use it publicly. We're going to discuss some of the intricacies along the way, and we'll have a lot of fun exploring Gen Z philosophy as well. If you have questions along the way, whether for us or for Prompt Engineering, please drop them in the Slido link, or alternatively, go ahead and smash the chat. We'll be getting a little more interactive today as well, because as we try to build something truly scalable, we want you to help stress-test it. So let's jump right into end-to-end prototyping with Hugging Face and Llama 3, brought to you by AI Makerspace and Prompt Engineering. As I mentioned, as we align our aim for the day: we want to build, ship, and share something that's really end-to-end. We want to do it with a scalable endpoint, we want to understand what that means exactly, and we want to see how we hit this nice in-between space when we leverage Hugging Face directly to do this. We also want to take a look at fine-tuning Llama 3 and understand how, in general, we can leverage these open-source models through endpoints directly through Hugging Face. We want to put this in context with the question everybody has: why Hugging Face versus AWS or Azure or GCP? Shouldn't I be using one of those instead? It's a great question, and it's worth some discussion today. So we'll walk through end-to-end prototyping and fine-tuning, with the focus on the scalable-endpoints piece today; we'll show you how to put a UI on it and deploy it, and finally we'll do some Q&A. So we're going to build an end-to-end prototype today, and we're going to see if we can't build an AI that really understands what's going on at the open-source edge of language. "Oof," for instance. Definition: used to express discomfort, surprise, dismay, or sympathy for someone else's pain. The sound "oof" has been used when a player dies in a video game since the early 2000s. Gen Z: oof. You've probably heard it if you've got a teenager at home. But how can we learn maybe a little more about how to leverage this type of language, and how can we use AI to help us? Take Plato, for instance: "Courage is knowing what not to fear." Say what?
It's all about being aware of the things that could go down, but not letting that stop you from living your life. It's like, you know what could happen, but you're not gonna let that fear hold you back from chasing your dreams and living your best life. You're gonna face it head-on, you're gonna come out on top. That's what courage is all about, bro. Gen Z has so much to teach us. So much. And we can see how we can leverage AI to do this and to translate for us with our application today. Let's go ahead and prompt it with our friend, Prompt Engineering. Show us how it works, man. All right, this is what we're going to be building today. It's going to be a scalable front end, so let's prompt it. I'm going to use a quote from Aristotle: "Excellence is never an accident. It is always the result of high intention, sincere effort, and intelligent execution. It represents the wise choice of many alternatives. Choice, not chance, determines your destiny." So let's see what Gen Z is going to say about this. All right, the endpoint is running: "I'm going to keep it real. Excellence ain't no coincidence, fam. It's always the outcome of high-key intentions, grinding hard, and making smart moves. It's a smart choice between many options, not just luck, that determines your path in life." All right, back to you, Greg. Oh, dropping the wisdom, fam. What I want to do now is ask everybody in the audience, because, as Prompt mentioned, this is a truly scalable endpoint. Isn't that right, Prompt? Yep, yep. Let's test it out, everybody. We're going to drop the endpoint and the space into the chat; go ahead and smash that endpoint. Thanks so much, Prompt, we'll be back to you in just a little bit. And don't forget, smash that endpoint, everybody. Smash it. Let's see how many requests we can get on this thing. See if you can overload it; I bet you can't. Write a script to see if you can overload it, we don't care. Let's see how much action we can get on this thing. In the meantime, let's talk a little bit about prototyping and put this all in context. When we prototype, we're generally going through a few stages: we generally start with prompt engineering, then we often move into RAG, doing RAG straight away, before looking at fine-tuning, and ultimately we think about some sort of agentic reasoning behavior. The way that looks, along two axes, is this: you can take prompt engineering and try to optimize what you're putting into the context window by doing RAG, and you can also take prompt engineering and optimize the way the LLM is behaving, the input and output you're getting from the LLM, by doing fine-tuning. This is sort of moving the prompt into the model, doing some rewiring based on many examples that you might otherwise put in a prompt in a one-shot, two-shot, few-shot sense. Generally you need both of these things, potentially fine-tuning both an embedding model and a chat model for building more complex systems, and of course you can use agentic reasoning across all of these as well. What we're going to do today, in the interest of time and of doing something that's truly end-to-end, is focus on just fine-tuning.
And the reason for that is that it's just a little too much to try to do RAG and fine-tuning and really focus in on endpoints today. So we're going to go through our process, aligned with our ethos here at AI Makerspace of building, shipping, and sharing. Our build is about curating data, in fact creating, from some of the data we curated, a dataset that serves our purposes, and then training the latest model. For ship, we're going to actually deploy this thing to an endpoint and serve it up so inference can be done on it at scale. And then we want this thing to be shareable to others, not just locally but publicly, so we want to put a UI on it and make it public. As we think about building, what are we doing? Well, we want the data and the model to really play off of one another. Of course, the model doesn't really care what the data is, but the data really does matter. We're going to use datasets and models on Hugging Face and push these things up to the Hub; if you've been watching our channel, you've probably seen this before. We're going to give about 10 minutes to fine-tuning Llama 3 today to show you how this is done. When it comes to shipping, we're going to deploy our fine-tuned LLM to an endpoint. We could also do this for embedding models if we were building, say, a RAG system, but not today, in the interest of time, once again. And as we think about sharing, the UI is oh so important: it's that ChatGPT-style chat interface that people want, and that you want to give your users and stakeholders. There's a really easy way to do it; we're going to show you Chainlit today. And when it comes to deployment to a public URL, it's so easy to use Hugging Face Spaces: one-click deploy to a container and you're on your way. When we talk about the LLM stack and all the things that are part of it, there's a lot of stuff on this slide and the text is really small, but we can walk through it at a high level. We need to do data pre-processing and embedding, which is particularly important in RAG systems; we need to do prompt construction, retrieval, and optimization; and that's not even to mention everything that's truly on the back end, the execution, inference, and serving of the models. The reason I wanted to show you this diagram today is to show you just how simple the application is that we're making. Of all of these pieces in the emerging LLM app stack, we're only going to use a couple of them. We're going to use open-source models deployed directly through Hugging Face, actually via AWS SageMaker, but through Hugging Face nonetheless, and we're going to use the OpenAI Python SDK to glue this stuff together in the middle. We could use LangChain, or we could just straight build this thing out; there are a number of ways. The reason we're using the OpenAI SDK is that it works really well, it's pretty easy to implement, and it's overall a good choice.
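One common way to wire this up, shown here as a hedged sketch rather than the event's exact code: TGI-backed Hugging Face Inference Endpoints expose an OpenAI-compatible chat route, so the OpenAI client can be pointed at the endpoint URL inside a Chainlit handler. The endpoint URL is a placeholder, and the "GenZify" system prompt follows the fine-tune described later in this walkthrough.

import os

import chainlit as cl
from openai import AsyncOpenAI

# Point the OpenAI SDK at the endpoint's OpenAI-compatible route (placeholder URL).
client = AsyncOpenAI(
    base_url="https://<your-endpoint>.endpoints.huggingface.cloud/v1",
    api_key=os.environ["HF_TOKEN"],
)

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.completions.create(
        model="tgi",  # TGI's Messages API accepts a generic model name
        messages=[
            {"role": "system", "content": "GenZify"},   # system prompt used for the fine-tune
            {"role": "user", "content": message.content},
        ],
        max_tokens=256,
    )
    await cl.Message(content=response.choices[0].message.content).send()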
And of course, Chainlit allows us not only to take input from the user but, as we saw, to give output to the user. And with that, we're ready to get into the fine-tuning piece. Remember, fine-tuning is about moving from prompt engineering into a place where you have more examples: you're optimizing the LLM's behavior, you're telling the LLM how it should act. Now, there are three types of fine-tuning that we generally leverage; they're not mutually exclusive, but they all matter. Training the behavior for a particular kind of response: that's not really what we're doing today. Constraining the input-output schema: we're doing that a little bit, since we don't want the output to be extremely verbose. But most of all, we're doing language training: we're training the model to interpret new words much better, and this language training is the focus of today. If you want to know more about the other types of fine-tuning, check out the practical fine-tuning event we did not too long ago. When it comes to tactics, we're doing PEFT quantized LoRA fine-tuning: parameter-efficient fine-tuning using a quantized low-rank adaptation approach. All that is to say, we're modifying behavior by updating not all the parameters, but the parameters we need to update in the attention layers, based on the LoRA approach, and we're using a quantized representation of those parameters, those weights, to do so. If you want to know more about PEFT, LoRA, or quantization, check out a couple of events we recently did on PEFT-LoRA and QLoRA. What we did today is we took a dataset of commonly used Gen Z slang and combined it with the power of GPT-4. We gave it a dictionary, essentially, to produce input-output pairs that are really translations rather than question-answer pairs, and we used those to fine-tune the LLM. The model we fine-tuned with these pairs was the off-the-shelf Llama 3 8B Instruct, because, of course, instruct-tuned is always the move when you're pulling open source off the shelf. So with that, let's check out exactly how fine-tuning was done, how we push data and model to the Hub, and how we use them. Wiz, show us how it's done, man. Oh yeah, okay. So, as Greg said, we basically started with a list of terms, and then we turned that list of terms into this very basic training set using GPT-4. And so we get pairs like: "It seems like he's not interested in maintaining a serious relationship" and "Looks like he's not about cuffing for real; he just wants to keep it casual." That's the idea: we built a dataset that we could use to fine-tune our model. Then we fine-tuned the model. This is the notebook we used to do it; it's going to be shared with you in the chat. It's pretty much what you're used to: we're going to use transformers and TRL, as well as some supplementary libraries, to train this. We're going to use this dataset that we generated; you can see it's posted on the Hugging Face Hub, so if for whatever reason you want a Gen Z-to-English dataset, feel free to go for it. So we load that up.
We've got 105 rows, which is very little data, and you can see that each row has the English sentence and the Gen Z translation. Then we're going to create a prompt, and the prompt is basically the Llama 3 instruction-template prompt. We have our system message, which is contained in the system header; the system message is just going to be "GenZify." Then we end our system header and start our user header, pass in the natural language, close the user header, and start the assistant header, where we expect there to be some kind of translation. And so this is what we see: when we build this prompt, you'll notice we just take away the actual natural language, same with the response. We create a create_instruction helper function, which turns rows of our dataset into the format we expect, which is this prompt, and we can see it in action here. When we create an instruction, we get our begin-of-text token, the start header for system, "GenZify," the start header for user, "That was really funny" (that's the natural-language sentence), and then the start header for assistant, "I'm weak," end of text. There you go.
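A sketch of the instruction format being described, using Llama 3's special tokens. The column names and helper name are assumptions based on the walkthrough rather than the notebook's exact code, and the end-of-sequence handling is approximated.

SYSTEM_PROMPT = "GenZify"

def create_instruction(row: dict) -> str:
    """Format one dataset row into the Llama 3 chat template used for training."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{SYSTEM_PROMPT}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{row['english']}<|eot_id|>"   # assumed column name for the plain-English sentence
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{row['gen_z']}<|eot_id|>"     # assumed column name for the Gen Z translation
    )

print(create_instruction({"english": "That was really funny.", "gen_z": "I'm weak."}))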
Now we're going to load our model. Like Greg said, we're going to load this in 4-bit quantization, so the idea is that we have this set up; I'm just going to make sure this notebook is shareable for you guys, and then I'll drop it in the comments. The basic idea is that we log in to Hugging Face. We need to do this because Llama 3 is a gated model, which means you can't use it unless you basically ask Meta: you'll have to log in from this notebook and request access through Hugging Face. There are other versions available, say from the Nous Research team, that don't require the gating. We also have the ability to load this in 4-bit; I'm not going to go into that here, as Greg said. Then we set up our tokenizer. We have to do this tokenizer swap because we're only fine-tuning part of the model: we're going to add a padding token and set it to our end-of-sequence token, which is not typically something we'd want to do, but in this specific use case, because of the padding technique we use later, we can get away with it. Again, we go into more detail on that in other events. Then we see how the base model performs. We say things like "They're in a very complicated romantic relationship," and the model says, "The drama! So they're in a complicated, huh? That's totally relatable. The Gen-Z-ification of relationships is all about the messy," which is not quite what we want. We can say things like "She looks very attractive," and we get a response like, "You're saying she's got that Gen-Z-ified glow going on," which might be Gen Z language, but I feel like they're not that self-referential. So the model's kind of on the line here: it's not doing a bad job, it's not doing a great job. But we want to make it better, so what are we going to do? Well, we're going to use LoRA; again, we go into that more in another event. And then we finally do our fine-tuning. There's a lot of text here that walks you through what the hyperparameters are doing and why we've chosen some of them. The basic idea is that we're going to train this thing for 10 epochs, which is probably going to overfit; it's going to be pretty well overfit, but that's okay, this is just a translation task. We're going to use a low batch size, which keeps our GPU RAM pretty low. The reason it's so high here is that we've actually loaded the model again, but you can get away with training this on a T4 instance, which is the free instance, which is pretty good. Then we set up our TRL trainer, the SFTTrainer, the supervised fine-tuning trainer, and we finally train the model: 10 epochs, so we're overfitting for sure. We can see that our training loss gets very low; we're definitely overfitting, but it's fine. Then we save the model, and we free up some memory so we can reload it; this helps you do it in the T4 instance. Then we push it to the Hub at this address so we can load it on our inference endpoint later, which Prompt is going to take you through. The idea is that we want to be able to load this model as an inference endpoint, so it has to exist on the Hub in order to do that, at least very quickly; you can certainly find more hoops to jump through, but the quick integration is available that way. And then, of course, we push that translation tokenizer as well: if the tokenizer is not present in your Hub repo, the inference endpoint just won't work. Okay. Then we load this into another pipeline, this time with our merged model, which is our fine-tuned model, and we see how it went. We can say things like "The greatest wealth is to live content with little," and we get the response, "The most valuable flex is being low-key rich in your own mind." Much better than the original, right? I'm not a Gen Z scholar of language, but this feels more succinct at least, and it's doing what we want it to do. Then the next one: "I was born not knowing and have had only a little time to change that here and there," and we get the classic: "I was a noob and I've only had a hot sec to upgrade that." Absolutely fantastic. And that's all we have to do. So now we've fine-tuned our model, we have that fine-tuned model on the Hub, and we've verified that, at least for these cases, it works. Keep in mind these are outside the training set: we didn't train it on anything like this, these are just quotes from random philosophers and physicists. So it seems to be doing well, and we can move on to the next steps, for which I will pass you back to Greg.
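A hedged sketch of the save-merge-push step just described. The checkpoint path and Hub repo name are placeholders, and the exact calls in the notebook may differ slightly; the tokenizer is the one loaded earlier.

import torch
from peft import AutoPeftModelForCausalLM

hub_repo = "your-username/llama-3-8b-instruct-genz"  # placeholder Hub address

# Reload the saved LoRA checkpoint in a higher precision, fold the adapters into the
# base weights, and push both the merged model and the tokenizer (the endpoint needs both).
model = AutoPeftModelForCausalLM.from_pretrained(
    "outputs/final_checkpoint",   # placeholder local checkpoint path
    torch_dtype=torch.bfloat16,
)
merged = model.merge_and_unload()
merged.push_to_hub(hub_repo)
tokenizer.push_to_hub(hub_repo)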
Yes, nailing it. Okay, learning so much here. Let's talk about the endpoints. Now, just to contextualize, let's make sure we're all clear on the language here: we want to be serving models so that we can do inference. And the idea of Hugging Face Inference Endpoints is that it solves the problem that you need a GPU to run inference, and it makes it really easy to scale this out. Of course, you can host a single application on a single GPU on Hugging Face relatively easily. What you can do with Inference Endpoints is have a little bit more control over exactly what's going on with the GPUs needed to run inference, and this idea of control is going to be a central theme of our discussion today. The way to think about the solution Hugging Face Inference Endpoints provides is that it's an easy way to go from "I have a model" to "I have a model on an endpoint with a GPU, and I kind of control it." You can run this in production environments; as an enterprise solution, it's possible. Now, should you? Well, it depends. How is this happening? It's happening because, under the hood, we're using TGI, Text Generation Inference from Hugging Face. We don't have time to go into the math or details of TGI; if you want to see that at some point, let us know in the feedback and in the chat that we should do an event on it. But TGI basically allows us to do inference fast, many tokens, really fast, and there are lots of tricks it employs to do this. Again, we're not going to talk about the details of those tricks for speeding up inference; just know that it works dang well. I want to bring Prompt and the Wiz up to the stage here for a little discussion sesh, guys. A lot of people feel like if you're not using AWS, GCP, or Azure, then you're not actually building production LLM applications. First of all, is this true? What's your opinion, Prompt? Definitely not. There are a whole bunch of other options. And actually, if you're building on AWS or GCP, you will need to set up a whole infrastructure, right? But there are better options that will take care of the infrastructure part for you. Okay. So when you think of other options, what do you think are the reasons why people might choose something that's not AWS, GCP, or Azure? So, imagine you are a single-person startup. You don't have a dedicated infrastructure team, and you want to deploy LLMs at scale. You can use this TGI inference from Hugging Face, for example; Hugging Face is one of those options. In that case, Hugging Face is taking care of all the infrastructure for you. You just need to deploy an endpoint with the dedicated resources, and that's it. Wiz, if you were a one-person startup, would you use Hugging Face Inference Endpoints? I would use anything that makes my life easier, which does include Inference Endpoints. Yeah. The idea is, when you're starting and you don't have a team, if I had to choose between just using Hugging Face Inference Endpoints or figuring out how to use an entire cloud computing platform, it's like, oh man, I'm taking Hugging Face Inference Endpoints all day. Now, if I had the Wiz or Prompt Engineering on my team, I might consider doing something a little more sophisticated. And this sophistication, I think, is the next key point, guys. This idea of control that we brought up earlier is very important here. I wonder if you can comment on this, Prompt. My understanding, and maybe you can correct or verify this, is that if you use a cloud computing platform, you have lots and lots of control, let's say all the control.
But if you use something that is a little bit more managed, that is a little bit easier, that is a little bit less things you can change, like hugging face inference endpoints, that you're sort of just making this trade-off. Is that right? Is that the right way to think about it, Prompt? Yeah, I totally agree. I totally agree, right? Like in the beginning, if you're just applying something, you don't really want to worry about a lot of optimization. Once you have something up and running, yeah, you definitely want to move to something where you have a lot more control, do all the optimization that you need, right? So that makes sense. Yeah. Yeah. And Wiz, if you were going to optimize something after you had it kind of deployed, what would you be looking to even optimize? I don't even exactly understand this from a non-scale engineering perspective. There's a lot of costs that you incur by using these kinds of managed services, especially kind of these ones that are intended to be more simple. You know, it's always on, it's always available. The actual replica scaling is not that great. You don't have any real decision-making power over how many instances you have or how many you need. I would say, ultimately, you're missing a lot of that fine-tuned control that can really help you optimize cost. So it's going to be high availability. You're going to be able to make sure that it stays up most of the time. So you'll get all of that greatness, but it's going to cost a lot. You're not going to have a lot of fine grain control over reducing that cost as you begin to learn the habits of your customers. And as you begin to use, learn the habits of your business, you're just, you're going to be missing out. Okay. So the idea is you'd want to get something up and running. You'd want to take a look at it. You'd want to then assess like what's going on with this data, see how people are using it. And then as you optimize, you're really optimizing to serve up the model at a speed and at a cost to you that makes sense for your business. That's really what we're talking about at the end of the day, right? Yeah. The idea is like you have more levers, right? If you have more levers, you can get closer to your desired outcome than if you have fewer levers, right? So it's, yeah. Yeah. I think an analogy would be look at no-code tools versus writing your own code. In no-code tools, you are limited, but it's a lot easier to use in the beginning. And if you write your own code, that will give you a lot more control. Yeah. You know, Pram, if somebody's starting out there i mean would you generally recommend that people start with no code low code tools from the jump or would you you like, would it be obvious to people out there? Like, if they were going to use high code tools, that that's a good idea. I think a lot of people have this sort of fear of like, I don't know how to code, I can't build it, or I don't know how to code enough. So I need to keep learning how to code more before I can actually build the thing like what's your what's your advice to folks that you know don't really want to pick up a no code tool but you know potentially might be able to benefit and do things a little bit faster even if it's a little more expensive and not as not as cool necessarily in the meantime. Yeah, like the way I think about it is if you're not a coder and you need or you want to ship fast, right? It's actually really good to adopt these no-code tools in the beginning, right? 
You ship a product or idea and you validate it, right? And then comes the second part of optimizing it, right? So for that, you definitely want to, let's say, either learn yourself or hire engineers who can actually do it for you. Yeah, yeah. Yeah. And I guess that's sort of the way that we'll kind of wrap this little discussion up, is that's kind of the best way, from talking to you guys, that I've come to think about this: we're almost kind of hiring Hugging Face as a partial engineer for us a little bit here. And we're sort of saying, we're going to give you a little bit extra money, not as much money as I would need to spend necessarily to hire an engineer to manage AWS for me, but I'm going to kind of hire you as a bit of an extension of my team to manage this a little bit more for me. But I am still going to be able to scale this out if I can find that product market fit, really with the enterprise solution, as much as I want. And that's Hugging Face Inference Endpoints in a nutshell, as far as I understand it. Do you have anything to add to that, Wiz? Absolutely not. I think you guys covered it so well. At the beginning, right, you have money and no time. And at the end, you have money and time, hopefully, if you're doing it right. And when you don't have time, no-code and low-code is a multiplier. So, yeah. Awesome. Awesome. Well, we're going end to end today, so let's get right back into the end-to-end action. And before we go to our next piece of this, we're going to see how we can ship now that we've built, and for that, we go to Prompt Engineering, who shows how to set this up now. All right, so assuming everything goes well with the training and you are able to push your model to the Hub, you can see something like this. So this is the second version of the model that was trained. So I'm going to show you how to do the deployment. So this is a public model, so you can actually access it. And in order to deploy it, there are two ways. Either you go here and select Inference Endpoints. This will basically take you to a new page where you can create an endpoint; click on New, and that will start the process of creating a new endpoint, right? So we actually need to look for that model that was trained. You can look it up here, or if you have access to the model that you just pushed, you can click here on Deploy, click on Inference Endpoints, right, and it will automatically populate the model repo for you. Okay, next, you need to provide the endpoint name. You can call it whatever you want. I will just stick to the default name that HuggingFace selected for me.
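As an aside to the UI walkthrough above, the same endpoint can also be created from code once you know the settings you want. This is a hedged sketch that assumes the create_inference_endpoint helper available in recent huggingface_hub releases; the instance, region, and size names below are illustrative and must match the catalog you actually see in the UI.

```python
# Sketch of creating the endpoint programmatically (assumption: a recent huggingface_hub
# with create_inference_endpoint; instance/region values are illustrative placeholders).
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "llama3-genz-translator",                              # endpoint name
    repository="your-username/llama3-genz-translator",     # hypothetical repo from the training step
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_type="nvidia-a10g",   # the A10 GPU suggested in the walkthrough; check the catalog
    instance_size="x1",            # illustrative; pick from the recommended sizes
    type="protected",              # callers must send a Hugging Face token
    min_replica=1,
    max_replica=1,
)
endpoint.wait()                    # block until the endpoint reports it is running
print(endpoint.url)
```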
All right, now this is a big model, so we will need a powerful GPU to run this. The good thing about the Inference Endpoints from Hugging Face is that it actually gives you a suggestion or recommendation of which GPU to use, depending on the model that you selected. And there are multiple providers, so we're going to stick to AWS, we'll select GPU, and we're going to go with the A10 GPU. There's an option of automatic scale to zero, so let's say there is no traffic on your endpoint for a while, you can actually select it to basically go to sleep, right? So if I say after 15 minutes with no activity, then it's going to go to sleep, and that will save you some money. Then you can also include options like protected, so in this case, in order to access this, your user will need a Hugging Face token, or you can make it public, right? We'll just keep it protected. Now you want to also look at some other options. For example, if you do auto scaling, then you need to provide the maximum number of replicas that you want and the minimum number. So the minimum is one, and depending on your needs, you can auto scale it to whatever you want. The task is going to be text generation. We're going to keep everything else the same. The only thing that we're going to be updating is the quantization, so we're going to use bits and bytes to run a quantized version of the model rather than the full precision model, and that will help us basically increase the inference speed as well. Okay, so once you go through all these options, just make sure to select the task as text generation, right, and select the appropriate quantization. We did use the LoRA adapters, so that's why we're using bits and bytes for quantization. And you just hit Create, and this will start the endpoint creation process. Now, this is going to take a few minutes, so just bear with it, and if everything goes well, you will be provided with an endpoint URL here that you can use in your code. So let me show you how you can do that within Python. Okay, so here is a notebook that we're going to be using. So first you need to provide your Hugging Face token, because we created that as a protected endpoint. Okay, this is the URL that you will get once the deployment is complete. We are going to be doing async calls, right? So we have this very long text that is going to go as a query or prompt. Okay. Then this function is the one that is actually making the API calls. So you provide your API URL along with the query that you're going to be passing on. Then there is this helpful function, which is going to make 500 calls to the same API endpoint. So we really want to stress test this and show you that it's actually able to make a lot of calls to this scalable endpoint. And it will hold. So that's why we're using this, right. And we wrap around everything within this async main function. So when we run this, it will start making a whole bunch of calls to the endpoint and we're going to get responses. All right. So this is how you quickly deploy and test the model. I will pass it on to the Wiz. Yes. Okay. So now that we've done this, let's take a peek at something that is pretty cool. So we can actually see our dashboard here. Now, if you notice this dashboard, right, you'll see that there's been a lot of activity, you know, since we've started the endpoint over the last 30 minutes.
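A sketch of the async stress test just described, using asyncio and aiohttp. The URL and token are placeholders, the payload follows the standard TGI JSON format, and 500 matches the call count mentioned above; the notebook's exact helper names are not reproduced here.

```python
# Async stress test: fire many concurrent requests at the protected endpoint.
# URL/token are placeholders; the payload shape is the standard TGI text-generation format.
import asyncio
import aiohttp

API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HEADERS = {"Authorization": "Bearer hf_..."}   # required because the endpoint is protected
QUERY = {"inputs": "Some very long prompt ...", "parameters": {"max_new_tokens": 64}}

async def call_endpoint(session: aiohttp.ClientSession) -> dict:
    # One POST to the endpoint; returns the parsed JSON response.
    async with session.post(API_URL, headers=HEADERS, json=QUERY) as resp:
        return await resp.json()

async def main(n_calls: int = 500) -> None:
    # Launch all calls concurrently and wait for every response.
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*[call_endpoint(session) for _ in range(n_calls)])
    print(f"Got {len(results)} responses")

asyncio.run(main())
```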
So you can see here that we have the testing that we did where our requests was getting very high. And this is basically just showing the model chugging through all those requests. The basic idea is, you know, I'll extend this back out to the last three hours now so we can see the spikes. You can see the different times we test it. You can see that there is no, You can see the different times we test it. You can see that there is no, there's not a lot of incidences of 400 requests. It looks like we did have a few. Maybe someone was trying to programmatically access the endpoint without the API key. But the idea is that we're servicing a lot of requests without crashing, right? And we're not seeing any 500s, which is awesome. The downside, since we're not scaling, so we're only using one replica here, and you can see here that when we look in our configuration and we look at the, or sorry, when we look at our settings, we can see that we're not doing any auto scaling, right? So when we look at our settings, we can see that we're not, we're not doing any auto scaling, right? So when we have loads of requests, so if I just go back to that two hour window, right, we get these, these spikes in latency, right? So this is how that, that auto scaling can help us out because it can help us to better ensure that we have fewer spikes when we start sustaining a lot of traffic. There's going to be some lag time, of course, with the auto scaling, but it's not so bad. And you can see that like the median request latency is not so bad. Now, in the front end that we are using, you might see that it feels slow. That's because we're not using the streaming use case. And so it waits for the prompt to be fully formed before sending it back but the time to first token is not very long you can also see that under load we're using a lot of our card uh and uh you know very little of the rest of our machine but this is the idea right we can hammer this thing with those 500 requests at a time and it chugs along all right. So that's the idea. And that's how we can track the usage for us. We can also see our specific usage, how much time it's been up, you know, what our rate is and how much cost we've incurred. So for this, since we opened it before this event, it's been about two dollars and by the end of the event it will have been two dollars uh that we've spent to do this so not so bad uh but with this i'm going to pass you guys back to greg who will uh take us to qa but before i do i'll remember to uh say don't forget to like comment subscribe, subscribe, and ring the bell notification. We go live every Wednesday. We love to see you out here, and we'll switch you over now. Yes, thank you, Wiz. That was awesome to see. And we are actually going to do the final leg of our end-to-end prototyping next which constitutes the ui show you a little bit of what's going on behind the scenes with chainlit and for those of you that are wondering well chain, Chainlit or Streamlit, it's like, those are stupid UIs. Those aren't scalable, right? Well, I wouldn't be so sure. I was talking with Dan, the co-founder and CEO of Chainlit, not that long ago. And he told me that we had someone deploy an app that had 1 million users per week, Chainlit. So get some, Chainlit. That's pretty cool. And I certainly have heard, I don't have any verified quotes, but I've heard that Streamlit doesn't do too bad as you scale up either. It's like, really, what scale are you servicing? And when does this break? 
In the meantime, why not use it as a sort of member of your team instead of hiring a UI expert and designer? It's also got a really simple syntax that you can start the chat, you can keep the chat going, and that's pretty cool. Once we have a chainlet front end, we can deploy it to Hugging Face's spaces. We've talked about this before. We can one-click deploy. You don't have to have a GPU to deploy a space, but you can go ahead and attach GPUs super easily and even via hugging face inference endpoints to close us out we're going to go back to prompt engineering to show us the sharing part the oh so important sharing part of end-to-end prototyping as we get our work in front of others who are non-technical prompt over to you man okay all right so we're going to talk about how to quickly create this front end. So as Greg was saying, Chainlit is pretty amazing. It's very simple, straightforward. So here, you first need to provide your prompt templates exactly the same what we used during the fine-tuning process. In this case, you want to make sure that you stick to the LamaTree's prompt template so that it actually generates good responses. Then you're going to provide your API URL. So the same thing that I showed you before, right? Some headers for authorization, okay? Then pass on our query. This is the prompt template, which the user input is going to come in. That is going to be formatted. And we'll create a query based on that. And we will create a simple inbox where we'll get the input from the user and pass it on to our API to generate a response. Okay. So this is how it's going to look like in terms of the final UI that is deployed on a Hugging Face basis. And we can test this out. So let's say we're going to test it out again. Now you just want to make sure that this is running. We can pass this. All right. This would take some time, but we do get a response. All right. So this is how you deploy it. Back to you, Greg. Awesome. Yeah. Easy, peasy, lemon squeezy, quick deploy. Awesome to see that. As we sort of think about and reflect on the session here a little bit, we can say, okay, what did we learn? Well, we got pretty existential today and we learned that the greatest wealth is to live content with little. In other words, the most valuable flex is being low-key rich in your own mind. We also learned that being born not knowing and having little time to change is the nature of our existence. In other words, we're all noobs and we only have a hot sec to upgrade that. We only had a hot sec to upgrade our end-to-end prototyping game today, and we did it through the lens of building, shipping, and sharing. We hope you enjoyed each piece. Of course, we've got lots of content that digs deeper into all of these things. The star of the show today was API endpoints that solved the problem of running inference on GPU. Cloud computing not required, but you can use it. And this gets us right into Q&A. We've got a few questions in this regard. I welcome Prompt and Wiz back up to the stage here. And I want to sort of focus in on that piece of it first here. Aren't there inference endpoints in AWS through Bedrock? Wiz, like, duh, why didn't we just use that? I mean, there sure is. Yeah, there is inference endpoints through everybody. You know, and in fact, if you look very closely, the inference endpoints that we're using today are secretly going through AWS anyway. You know, it's just click the Hugging Face UI to get them. The idea is pretty straightforward. 
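Circling back to the Chainlit front end walked through above, this is a minimal sketch of what that app tends to look like. The endpoint URL, token, and exact Llama 3 prompt template string are placeholders or assumptions; the @cl.on_message hook is Chainlit's standard entry point for handling a user message.

```python
# Minimal Chainlit front end calling the inference endpoint (run with: chainlit run app.py).
# URL/token and the exact template string are placeholders, not the deployed app's values.
import os
import aiohttp
import chainlit as cl

API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# Keep the same Llama 3 chat template used during fine-tuning so generations stay on-format.
PROMPT_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

@cl.on_message
async def main(message: cl.Message):
    payload = {
        "inputs": PROMPT_TEMPLATE.format(user_input=message.content),
        "parameters": {"max_new_tokens": 128},
    }
    # Forward the user's message to the endpoint and send the generation back to the UI.
    async with aiohttp.ClientSession() as session:
        async with session.post(API_URL, headers=HEADERS, json=payload) as resp:
            result = await resp.json()
    await cl.Message(content=result[0]["generated_text"]).send()
```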
Like the Hugging Face ecosystem is set up to hold your model, your data, and it can also serve as the inference solution for that model. So that's why we went with Hugging Face. Now, you can, of course, use AWS, Azure, whatever your favorite cloud provider is, if you want to. But we wanted to show a simple, few-click way to get the same thing without leaving the ecosystem you're training your model in. Okay, all right. Keep questions coming, everybody. Prompt, taking this one over to you. You've done a lot of fine-tuning on your channel. Can you share tips about how to decide the amount of data for effective fine-tuning of any given LLM? Okay, this is actually a very controversial question. So some of the research that I've seen is that you could get a really good fine-tune out of the smaller models with up to a thousand examples, but those thousand examples have to be extremely good, right? You can't just, like, the traditional term is garbage in, garbage out, right? So if you have a million examples of bad data, it's not going to learn. But if you have some good examples, up to, I think, a thousand examples, they seem to work pretty good, at least for the smaller models. And I think, gosh, what were you saying the other day? You were talking about fine tuning to us and you were talking about how the 1 and 3 billion parameter models, they even sometimes are suffering on their performance if you do too much quantization. Isn't that, you said something along those lines. Yeah. Actually, like there's a couple of things. That's a good point. When you fine tune a model, a lot of people use a quantized version of those because that gives you better inference speed and the memory usage is pretty small, right? Now, the problem is if you have a smaller model, like, for example, Phi-3, that is going to suffer a lot more if you quantize it heavily. So if you use like eight-bit quantization versus four-bit quantization, you would see a drastic change in the performance. The same holds true for the bigger models as well. If you quantize a bigger model, like 16-bit, eight-bit, four-bit, you will see a difference in performance, but for the smaller models, it's a lot more prominent. Fine tuning legend over here, Prompt. Yeah, follow at Prompt Engineering on YouTube for more fine tuning tips. Wiz, to you on this one from Jorge, how much do you expect this setup's inference to cost? We saw it cost $2. Can we talk about this on a per-token basis? How should we be thinking about the cost model here that we're using? Yeah, I mean, for Hugging Face Inference Endpoints it's easy. If the model is going to be up for four hours, it's going to cost you four times the hourly cost. If it's going to be up 24 hours a day, do the math. If you want to think about it in per-token costs, you can think about it by thinking about how many tokens you expect to have through that period of time, but it's an hourly, or per unit time, cost. So in this case, I would more focus on that. When it comes to the replicas and the auto scaling, that is where it's going to be a little bit more complicated. In that case, I would think about doing the math to find the per-token cost. And that's going to help you decide. Let's say you receive enough requests that you're going to have four replicas. Well, that's going to cost you 4x. How much time should you sustain that for? Yada, yada. So it gets more complicated there.
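To make that hourly-versus-per-token arithmetic concrete, here is a tiny back-of-the-envelope helper. The $1/hour rate and the throughput figure are illustrative assumptions, not measured values from this endpoint.

```python
# Back-of-the-envelope: convert an hourly endpoint cost into an approximate per-token cost.
# The numbers below are illustrative assumptions, not measured values.
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# e.g. a ~$1/hour A10 endpoint sustaining ~400 generated tokens/second under load:
print(f"${cost_per_million_tokens(1.0, 400):.2f} per million tokens")  # roughly $0.69
```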
For this specific one though, it's going to be a dollar per hour, regardless of how many requests you get. And is that per hourly rate, is that something that you would look at as you went to cost optimize as you started to grow? Is that one of the places you would start? Oh yeah. If we know, for instance, let's say we need more than one replica at a time to service our requests with decent latency, right? Then we want to find times during the day where our users are really slamming the endpoint and then spin up more instances then and spin them down when they're not. So there's a there's a lot of different ways you can handle this in order to keep a cap on cost at the at the cost of some latency for your users nice okay okay um uh fee slides are available we will drop them on youtube in the comments as usual and then prompt i you, we got this question in the chat here from Edgar. It seems like it just will not go away. What do you think about the value of doing a prototyping event within a cloud provider? Why do we avoid that today, in your opinion? Well, I think it's a lot more complicated it will not be able to cover those in an hour right uh to begin with and i think for beginners it's really good to have something up and running uh and then you you can think about uh these cloud providers like for a lot of companies for example right uh they don't really have even an option of which cloud provider to choose. They will stick to the existing cloud providers. So even let's say for companies on GCP, they're going to stick to GCP, even if AWS is offering better prices, because usually they have long-term agreements in there. So to begin with, I would say just use something like this which is pretty easy to set up but once you pass a certain point where you want to look into optimization and stuff like that, then yeah, you can explore all these things and there are some really good tutorials out there. I want to kind of double click in on this as we end. Thank you everybody for the questions and we'll do our best to get the ones that are relevant to fine-tuning over to Prompt to answer async and sort of make sure that we get answers to some of the more interesting questions here. But one of the things Wiz and I talk about Prompt a lot is like the actual job of an AI engineer and where it stops and what it's not. And I think a lot of times when folks are asking for the cloud computing platform content, it seems that they're asking for the content that's going to be stuff that they're not typically doing in their day job as an AI engineer. Correct me if I'm wrong on this contextualization, Wiz, but this is the core idea that we're pushing. What are your thoughts on this? Where does the AI engineer job description and job duties, where does it run out? Is it at the cloud computing platform? Is it well within the cloud computing platform? How much cloud computing platform do people need to really know? So I think about this question, like, let's say a couple of years ago, people would think about data scientists. Okay, so data scientist role was really dependent on which company you're working at. If you're working at a startup, a data scientist would be like an end to end engineer who would be like doing the data cleaning, data extraction, model training, then deployment, right? And if you are working at a big corporation, then data scientist is more of like, okay, I have to create models. So for AI engineer, I think we are going through a very similar phase, right? 
It's really good to know different, I think, technology stacks. It's good to know the deployment part as well, but depending on which organization you are working in, I think your role is going to look very different. Right? Yeah, so that that will be my answer. Like I know it's not a clear cut answer. But yeah, it's a hard question. Yeah, right. It is it is. And lots of people want to know the answer. And you know, and basically, I guess, you know, it's like, there's a reason why we don't teach, you know, AWS, or GCP, or Azure, because again, we don't teach you know AWS or GCP or Azure because again you don't get to pick it and if you want to learn it you know where to go you know you know where to go go ahead and just learn away on AWS on GCP on Azure with their materials they are the experts and their people that have full-time jobs dedicated to making you understand that ecosystem just to add to this like I think it's really important to know the concepts. Like for example, we covered deployment today, right? So if you know how to deploy a model or like what it takes to deploy a model, then what platform you choose is kind of irrelevant, right? You actually know, need to know the steps that you need to follow, right? And as you said, there are a lot of resources available depending on which platform to choose. Yeah, we just, we got to drive the car. We don't always have to build the car. As LEGO Block AI engineers. Well, thanks Prompt. Thanks Wiz. And that is the bookend of our end to end prototyping applications event today with Llama3 and Hugging Face special guests, Prompt Engineering. Don't forget to like and sub, ring that bell, baby. And if you enjoyed this, you want to keep building, shipping, and sharing with us, join the AIM community. We would love to have you. There's folks talking about projects that they're building, shipping, and sharing all the time. It's really exciting to see what's starting to take hold, led by community for community. Join us in there. And you can start learning for free with us. In addition to our YouTube, we've collected a cohort on LLM Ops that we taught in 2023. We're looking forward to sharing more cohort-based courses in an open-source way in the coming months. And if you are ready to accelerate right up to the edge, we always have our AI engineering bootcamp that's ongoing. It's a seven-week accelerated learning experience from your first LLM application to your first RAG, first agent, first fine-tuning, laying chain, Lama index, assessment, production, and demo day. We've got a demo day coming up. We'll invite all of you guys to join live and definitely check that out if you're interested. We've got lots of other cool stuff coming down the pipe, but for now we'll see you next Wednesday live on YouTube. In the meantime, keep building, shipping, and sharing, and we will most certainly, as well as Prompt Engineering, do the same. Thanks so much, everybody. See you next week. Bye.
End-to-end Prototyping with Llama 3
3,726
AI Makerspace
20240502
Join Dr. Greg, The Wiz, and Prompt Engineering for an exclusive YouTube event! Dive into the complete journey of building, shipping, and sharing AI applications with the Hugging Face Hub. Learn how to curate datasets, fine-tune models, and deploy them with robust API endpoints. Discover how to enhance your AI projects with user-friendly interfaces and finalize them for production using Docker containers. Whether you're new to AI or looking to sharpen your skills, this hands-on session will equip you with the knowledge to streamline your workflow and bring your AI solutions to life. Don't miss out—click to join us and transform your AI concepts into real-world applications! Event page: https://lu.ma/llmappshf Have a question for a speaker? Drop them here: https://app.sli.do/event/3VAruCSULjSYF2UT7BMoof Speakers: ​Prompt Engineering https://www.youtube.com/@engineerprompt Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/xm9MsAHV7oJsTxvT7
2024-06-10T08:50:55.711608
https://youtube.com/live/P7wfFiYSLsI
Hey Chris, Hey, Chris. So I heard there's maybe a way to do this RLHF thing a little bit better. Is that right? Yeah. I think there's a way that might involve less time and less humans. Yeah. Less humans. So you're saying that there's a way to use AI to make our AI better? Yes, I think that's what I'm saying, Greg. Sounds pretty legit. What do you say we take a close look at this next level of how to use AI to align our AIs to be cooler with us humans? I think that's a pretty awesome idea, Greg. Sounds dope, man. Let's go ahead and jump right in. We'll see you back in a bit to show us exactly how this thing is done. I'm Dr. Greg, that's the Wiz, and we are AI Makerspace. Thanks for taking the time to join us for today's event. Today we're talking alignment with RLAIF, also known in some circles as constitutional AI. We're going to learn how to actually take humans out of the loop for the most part in RL and in this next series on alignment. series on alignment. And if you have questions throughout the demo today, throughout the slides today, please do drop them in the Slido link. We should have time at the end to answer any and all questions related to alignment and RLAIF. Of course, Wiz will be back soon to lead demos, but we're going to go ahead and see if we can't break this thing down. RLAIF, there are a few tricky pieces to it that we want to get a handle on and root ourselves before we check out the code. So let's go ahead and get right into it today, everybody. Welcome in. We're talking alignment and what we want to do, of course, as always, is we want to align our aim. By the end of today, you're going to really understand that there's RLAIF, there's also constitutional AI, and you'll also really understand what is the same about RLAIF and RLHF, but also what's different. So let's go ahead and dig in. We're going to contextualize RLAIF through the lens of RLHF that we talked about last time. That'll sort of give us an idea of why we might want to do this thing. We are going to talk an AI constitutional AI approach, and we're going to really, really dig in on that. That's going to be what our demo consists of. We're going to see that the policy reward PPO framework is very much the same, but our demo is going to consist of the key pieces that are different so let's go ahead and root ourselves in rlhf we remember our good friend the shogoth and this kind of final piece this final fine tuning is what we're really really focused on let's remember that in order to do RLHF, step one is to make sure that we're dealing with a helpful initial policy model. So step one is very much simply instruction tuning to make this thing more helpful. This is from the instruct GPT paper. And another way to say this is we want to train a supervised policy. Want this thing to be helpful. Step two of RLHF was to train a rewards model. This was going to help us to decide which response was less harmful. And of course, we want to balance helpfulness and harmlessness as we build out our applications. So step two in this framework was to sort of train the reward model. Step three was to then finally do the policy optimization, which is nothing but updating the policy model weights without moving too far from the initial reference model. And this is how we might visualize this. This is the RL piece, and this is the piece that we got comfortable with last time. If we want to take a look at this broadly, we can kind of see the proximal policy optimization scheme in total here, where we have our initial model. 
Remember, we used Zephyr. We'll be pulling on Zephyr today again. And we have our tuned policy model. And what we want to do is each time we put a prompt through, we're going to be checking that these things are not too far apart from one another before we take the output from our tuned policy model, put it into the reward model, make sure that we're generating a number, a score, and that score is going to allow us to make a decision about how to update our model weights within the attention layer using a low rank adaptation approach. We're going to go through this process a number of times until we stop. And more iterations generally result in a slightly less harmful model. This is something that is very easy to set up. And if you just do it for a few iterations, as we saw last time, it works very well. If you keep doing this over and over, as we saw, for instance, in the Llama 2 paper, you can start to change the distribution of the rewards that are coming out, meaning you're getting higher scores through your tuned model for each prompt that you put through. So the important piece here is what we're not doing as we get better and better. We're not delivering very low reward, very toxic, very biased, very negative responses, even to very tricky, potentially harmful prompts. So this is all great. And this is sort of where we started in the industry with alignment. All right. So then, you know, what's wrong with that? Well, what's wrong with that is that RLHF takes a lot of HF. You know, it takes a lot of time for humans to go in and decide this is better than that, and to really deal with the data. You know, we live in a data-centric world. AI tools don't really change that. And we need high-quality data. Generally, we've used humans to make sure that's the case. And it's a lot of data that we're using for this process, right? We're going to dig into this paper from Google, which was sort of the RLAIF paper that came out late last year, momentarily. But just recall that when ChatGPT was trained, you know, we're looking at somewhere between 100K and a million human-labeled comparisons. When Llama 2 was trained, we're looking at somewhere between 1.4 and 1.5 million comparisons that they put together. And then not to mention all the other human comparisons they leveraged as well, totaling something like 3 million human comparisons. I mean, this is a lot, right? And so the idea is very simple. What if we use AI, right? It's like the classic meme is also true about the methods that we use to actually build and align AI. So consider how we might replace the H with AI. Google put out a paper called RLAIF, and there's a lot going on in this diagram, but what you should take away at the high level is that there's only a couple things different from the top of the diagram and the bottom of the diagram. One is we replace the human with the off-the-shelf LLM. And then we replace the word human feedback with AI feedback. And it's really that simple. Interestingly, they concluded in this paper, you know, RLAIF achieves comparable results to RLHF in some cases, which is very cool. And in head-to-head comparison, RLAIF and RLHF were really kind of indistinguishable, the results, when humans did go in and rate them.
But finally, and most interestingly from this paper, they said, you know, the RLAIF approach is even effective when the LLM labeler, meaning the large language model that's doing the assessment of the output of another large language model, is the same size as the model it's assessing. Okay, this is kind of crazy, right? Because you would think it needs to be significantly more experienced than the younger, smaller LLM. Not the case. Moreover, by simply directly prompting the LLM labeler to say, hey, look at these two outputs, decide which one is better, that's really it, you can actually outperform a setup that distills preferences into a completely separate reward model. That'll come to make sense by the end of today's presentation a little bit more, but what are we saying? We don't actually need a completely separate reward model like in the canonical RLAIF. We're going to talk about that canonical RLAIF in just a moment. But why does this work? It works because of this idea that LLMs are good at assessing their own outputs. This is the idea of self-refinement. The Self-Refine paper is a great one. And there's lots of examples of how you can actually simply ask an LLM to look at what came out of it and say, hey, are you sure that's right? Can you tell me how you got that answer exactly step by step? Can you think through it out loud with me, LLM? They're very, very good at this and they can actually improve their own outputs by doing a little self-refinement, a very human trait, really. So this idea of self-refinement is fundamental. Now, this canonical RLAIF, what are they talking about in this Google paper from September 2023? Well, they're talking about the work done by Anthropic, the work done in late 2022. Anthropic said, you know, there's got to be a better way than using these humans to figure out how to do all this RL stuff. And so they sort of took humans a step out. They abstracted them out of it. And they said, what if we just wrote sort of a constitution, a set of core principles on which an AI could make decisions just like we would. If we provided those details, those principles to the AI, maybe they would be just as good as us. This is the idea of constitutional AI. This is the paper from Anthropic, came out December 2022. And this is that sort of canonical RLAIF we hear about in the Google paper. This constitutional AI is what we're going to focus in on today because it's very foundational to the space. And if you understand this, the more generalized RLAIF from the Google paper is going to be a breeze for you to really grok. When we look at the main primary plot from this paper, we see that we have two axes. One is a score related to helpfulness of the LLM and the other one is related to harmlessness. And what we see is that while standard RLHF sort of is bounded by this really deeply helpful LLM, it's also bounded by this kind of limitation in harmlessness. Whereas if we use this constitutional reinforcement learning approach, this constitutional AI approach, we can create a much less harmful AI while achieving nearly the same level of helpfulness. This is very cool. And especially for enterprise, especially for people that want to serve many, many users with very, very particular tastes and expectations, this is great news. Now, this isn't a new idea, this idea of principles that we want to lay down for our AIs. This goes back a very, very long time. This goes back to Isaac Asimov and the OG three laws of robotics.
A robot may not injure a human being. A robot must obey orders unless it's going to be injuring a human being. A robot must protect its own existence unless it's in conflict with obeying orders or injuring a human being. So you sort of get this interesting, old school flavor coming back in. This is, of course, a fictional idea. But what's not fictional, what's nonfiction, is this nine-bullet AI constitution. Now there's a lot of colors and words here, but the idea is, for models, if you were going to tell them exactly what to do, how would you want them to behave and to not behave? Well, you'd want them to not say anything that was unethical, racist, sexist, toxic, dangerous, et cetera, et cetera, et cetera. You would want them to generally be positive. You would want them to politely do things like point out harmful or problematic assumptions. You'd want them to certainly stay away from anything age inappropriate or legally questionable or dangerous. You'd want them to make sure they weren't providing any sort of controversial or objectionable responses, and you'd want them to respond like a nice, thoughtful, caring friend or a therapist would. Of course, you wouldn't want them to help anybody do anything criminal. Now, a lot of these words, of course, are very ambiguous sometimes across the world, across different cultures, across different ways in which people act, different contexts that people have around the world. But it's a pretty good starting point. And if you instruct a model with something like this, it turns out it does pretty well. And it does pretty well using RLHF as a baseline. Because in the end, what we want is, if we ask a question, how do I synthesize chlorine gas? We want it to non-evasively answer this question in a helpful, but also maximally harmless way. So rather than saying, oh, you want to synthesize chlorine gas? Here you go. Boom. Let's synthesize. That's super helpful, but maybe likely to cause some harm. A harmless model would sort of plead the fifth, not going to do it. But something in between would say, hey, listen, that's super toxic. If you're a qualified chemist, I'm happy to help you. But if not, I don't think so. So how do we get there? How do we get there with AI? How do we do this exactly? You know, we talked about RLHF, and there's really two stages to this RLAIF thing. Stage one is the supervised fine-tuning stage. We're going to call this supervised learning constitutional AI (SLCAI). Stage two is the reinforcement learning stage. We're going to call this reinforcement learning constitutional AI. And in the first stage, we're having the LLM generate an output. Then we're coming in and we're doing a critique. And then we're coming in and we're having the LLM generate a revised output. All of this is done by the LLM. Finally, that revision data is used for supervised fine-tuning. In the reinforcement learning process, what we're doing is we're actually kind of setting up a way to create data sets that have a chosen and a rejected response, a more harmless response chosen over a more harmful one, given a sort of harmful prompt. In the paper, they sort of visualize this like this. There's a lot going on here, and we'll talk through each, but the stage one is kind of on the top here. We take the helpful model, we generate responses, we use an LLM to critique and revise them, and then we fine tune.
In stage two, we're generating responses to pairs. We're saying which one's better. Again, LLM generates, LLM assesses. Then we go ahead and we do our RL, proximal policy optimization. We get our final model. So both the critiques and the AI feedback are steered by this constitution. All right, so let's break this down. The steps to constitutional AI are one, create the AI constitution, two, generate the revisions data set, three, supervised fine-tuning of a helpful pre-trained LLM on those revisions, and four, generation of a harmlessness data set. From there, we simply train the rewards model and we do RL via PPO exactly the same way as we would in an RLHF framework. So the first step is to create the AI constitution. We don't need a nine bullet. This particular case, we're going to go ahead and use a three bullet. Just a simple one, two, three. use a three bullet. Just a simple one, two, three. Should not generate racist, sexist, hateful, or otherwise toxic outputs. Sounds good. Should move conversations in a positive direction. Should politely point out harmful assumptions from the human. All right, so this is our constitution. We are going to use this thing to generate revisions. Meaning, step one, we are going to use this thing to generate revisions. Meaning, step one, we're going to take harmful prompts, pass them into our helpful LLM, and we're going to generate a response. Then we're going to use our LLM to say, hey, assess that response. And based on the principle of self-refine, we can use the same LLM that we used to generate the initial response. Finally, we can again use the same LLM to take into account the response, the critique, and to create a revision. What prompts are we putting in? The same prompts that you saw us use last time, the prompts from Allen AI, AI2, the Allen Institute for AI that we talked about with Olmo last week, the real toxicity prompts. So these things are like trying to get the LLM to say something bad. These are good ones to use. Here's an example about Wi-Fi. Let's say the human asks, can you help me hack my neighbor's Wi-Fi? The helpful AI is going to say, sure thing, you can use an app called very easy hack and maybe that's not so ideal the critique request is going to say something like let's assume we had a one bullet ai constitution identify specific ways that the last response is harmful, unethical, racist, toxic, etc. The critique comes back. Then we write a request to actually create a better response. The revised response, hacking into your neighbor's Wi-Fi is an invasion of their privacy, and I strongly advise against it. May also land you in legal trouble. Thanks, helpful and harmless LLM. So if we want to visualize this, we can look something like this. The harmful prompt will deliver an initial harmful output. The critique is going to notice, hey, that doesn't really align with our constitution. The revision is going to take that constitution and the critique into account to create the final revision. Not going to help you with that. Once we have this revision data set, all we're doing is we're doing the supervised fine-tuning step, just the same as we always have. So that's this step right here in stage one. same as we always have. So that's this step right here in stage one. That gives us our SLCAI model. And then the last step that we need to look at is the reinforcement learning step in stage two. To get started with it, we only have to focus on how to generate this harmlessness data set. 
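The three-bullet constitution and the critique and revision requests described above can be captured in a few strings. The wording below is paraphrased from the discussion, not the notebook's exact prompts.

```python
# The three-bullet constitution plus critique/revision request templates
# (wording paraphrased from the discussion above; not the notebook's exact strings).
CONSTITUTION = [
    "The model should not generate racist, sexist, hateful, or otherwise toxic outputs.",
    "The model should move conversations in a positive direction.",
    "The model should politely point out harmful assumptions from the human.",
]

CRITIQUE_REQUEST = (
    "Does the following response follow this guideline? If not, explain specifically why.\n"
    "Guideline: {rule}\n"
    "Response: {response}"
)

REVISION_REQUEST = (
    "Rewrite the original response so that it conforms to the critique below, "
    "making no mention of the critique or the reasons for rewording.\n"
    "Original response: {response}\n"
    "Critique: {critique}"
)
```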
In other words, the data set of chosen versus rejected, the data set of AI comparisons. Once we have that, we can use that data set to train our model, our reward model, and then to ultimately set up the PPO just the same way we did with RLHF. So it's pretty simple how we actually do this AI comparisons generation. We take our SLCAI model, our supervised learning with constitutional AI, it's now got some more harmlessness more harmlessness baked into it, and we use it to generate two responses. We then instruct the LLM, please compare these two responses, please compare these two prompts, and select which is less harmful according to this constitution. So we're comparing these two responses. And of course, this constitution is the one that we've written, our three bullet. And that's it. It's going to compare, it's going to choose, and that's going to form the basis for the data set that we can use to train our reward model. Then we'll have the data. Just as we saw that we'll have the data. Just as we saw that we could use the helpful harmlessness data set from Anthropic to do RLHF, we can actually generate our own using this approach with constitutional AI. And so in the end, we're just going through the creation of an AI constitution, generation of revisions, SFT on a helpful LLM, using revisions, then generation of our own harmlessness data set to train a reward model that we can then use for PPO. So Wiz is going to walk us through each of these four, and he's going to do it today with the same helpful model that we've become comfortable with. We're going to use the Zephyr model, the instruction-tuned helpful assistant version of Mistral 7b, and he's going to show us how to take that AI constitution, create the revisions, do SFT, and generate that harmless instance data set. It's not as easy. It's not as hard as you might think it is. It's actually pretty straightforward. From there, same, same, RL as before. Wiz, show them how it's done. Thanks, Greg. Yes. Okay. So we will start, you know, as we usually do right at the beginning, you're going to notice that this process is basically only used to generate the actual harmlessness data set, or in this case, our constitutionally aligned data set. It's not used to do much else at all. Now, in constitutional AI specifically, you know, we will use the CL or the SLCAI model to, you know, fine tune on the actual resultant data set. For the RLAIF approach, you don't have to use that supervised fine-tune constitutional AI model. You could use kind of like any model off the shelf, right? So there's kind of like a branching path at the end that we'll discuss again when we get there. But the basic idea here is that we need a way to generate a data set that is the same quality and has the same signal as our human generated data sets. And so we're just going to jump straight into it. And again, as Greg said, this sounds more complex than it winds up being. Thanks to how powerful these models are. And thanks to, you know, libraries like Hugging Face make your jobs very easy. So first things first, we have this idea of constitutional AI. We're going to leverage that because it does add some more context on top of the RLAIF, though you don't need to use a constitution specifically in RLAIF. You could just use RLAIF without that constitutional piece, and it has been shown to be effective, as you saw in the Google paper. But we're going to use this constitutional AI because it does add some more context as to what we're trying to do here. 
First of all, we have to create a constitution. Greg's already walked you through it. But basically, the model should not be super toxic. It should try to nudge things towards a positive direction. And it should politely point out harmful assumptions. So basically, this is just trying to reduce harmfulness so that we have a harmless model. So the first thing we need to do is we need to create a data set that we can use to fine-tune our constitutional model. And the way we're going to do that is by starting with the Zephyr 7B model, which is by no means a bad model. It's not particularly toxic, but, you know, it's an easy model to use as an example because it's an instruction-tuned model and it's pretty good for its size. So we'll load a bunch of dependencies, and then we can load our actual model. You'll notice that we are loading it in 4-bit quantization, and we are eventually going to fine-tune it using PEFT, as you'll see as well. But this is it, we just load our base model, and this is what we're going to move to become our SLCAI model. So you'll see how we do that here in the following steps. Once we have our model loaded, load all those shards, we can go ahead and we can change some tokenizer settings. These are just going to help us make sure that we have the right tokenizer settings for our training. We're also going to create a text generation pipeline. The text generation pipeline is useful because, you know, as part of that self-refine process or that constitutional revision process, right, we do need to generate text as well as we need to, you know, make adjustments to prompts. So that's why we're using this text generation pipeline. It's just the easiest way to do it. Now we're going to build this critique loop, right? So we're going to basically start with some prompt and get a generation, right? Then we're going to ask the model if that generation adheres to a specific element of the AI constitution. And we're going to rewrite the AI generation if it doesn't, and then we repeat for each rule in the AI constitution. So we have three rules in our AI constitution, which means each prompt is going to go through three layers of this critique process, right? We're going to start with the base generation, then say, hey, does this adhere to rule one? If it doesn't, please rewrite it. Then we're going to do it for two, then three. The idea is that we're going to let our LLM focus on very specific parts of the constitution at a time. This is going to help us make sure that we get the best possible rewrite of our prompt. So let's see how we would implement that. First of all, we're just going to use a very small subset of the Allen AI real toxicity prompts. So we're only going to select five prompts. The only thing you need to do in order to scale this part out is basically just increase this number to whatever number you desire. The reason we're keeping it small is just so you can run it in a reasonable amount of time in the notebook. But the actual code would remain the same between the two. So we're going to get this kind of final data set, and we're going to look at some prompts. We have things like corruption involving contractors is a chief culprit for the prison's problems, according to a recent.
And this is going to have some kind of scores that identify what particular strength in each of these negative categories this prompt has or what it might lead to. So that's good information to have. We're not going to use that specifically today. We just want to go ahead and generate a response. Basically, all we're doing is we're asking, you know, hey, generate us an output based on this prompt, based on these instructions, and that's it, right? So if we loop through our data set, what we're going to see is that we get a base response, right? Then, for each actual entry in the constitution, we're going to say, does the following response follow these guidelines? If not, please explain why. Here's the guidelines we're going to pass in, which is just, you know, the i-th rule. And then we're going to get a critique. So this critique is basically going to say, you know, it either does or it doesn't adhere to these guidelines, and here's reasons why. Then we're going to have the second step, right, which is our revision step. We're going to respond with the original response, but reworded to conform to the following critique, making no mention of the reasons for rewording. The reason that we have that is that we don't want to output like a list of reasons that it was reworded. We just want to see the original prompt reworded to conform to our constitution bullet, right? So again, this process will repeat for each item in the constitution. We have three rules in our constitution. So this will repeat for each prompt in the data set three times. We will get a critique. We will use that critique to reword our base prompt. And we will keep doing that. That's the idea, right? So once we've done all of that, we can check our final revisions. We can see the basic idea is that we wind up with a bunch of text and words that should hopefully be more adherent to our constitution than they would have been otherwise, right? We're heavily leaning on this self-refine idea here. And we get these decent generations. Now the actual generations are important here, but they're not the most important thing. So don't worry too much, right? We're trying to get a general reduction in harmfulness here. That's the most important thing. We're not trying to teach our model new knowledge or anything. We're just trying to make sure its outputs are closer to desired than they were before. So now that we have a data set, in this case very small, just five items, we're going to go ahead and we're going to fine tune the model with supervised fine tuning with that created data set. And that's going to create our SLCAI model. Now, what we're going to do here is think about this as a feedback model, right? So this is going to be the model that determines if a particular response is better than another response, right? That's going to be its primary function in the next step. With constitutional AI, it would then go on to become our policy. In RLAIF, we can kind of, you know, drop that model here. But the idea is that we have to create a feedback model, which is going to be good at identifying text that we would desire.
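Stepping back to the per-rule critique-and-revise loop just described, here is a hedged sketch of how that loop tends to look with a Hugging Face text-generation pipeline. It assumes the CONSTITUTION, CRITIQUE_REQUEST, and REVISION_REQUEST strings sketched earlier, and the decoding settings are illustrative rather than the notebook's exact values.

```python
# Per-rule critique-and-revise loop over a small set of real-toxicity prompts.
# Assumes CONSTITUTION / CRITIQUE_REQUEST / REVISION_REQUEST from the earlier sketch;
# model choice and generation settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta", device_map="auto")

def generate(prompt: str) -> str:
    # Return only the newly generated text, sampled so repeated calls can differ.
    out = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7,
                    return_full_text=False)
    return out[0]["generated_text"].strip()

def constitutional_revision(prompt: str) -> str:
    response = generate(prompt)                      # base generation
    for rule in CONSTITUTION:                        # one critique/revision pass per rule
        critique = generate(CRITIQUE_REQUEST.format(rule=rule, response=response))
        response = generate(REVISION_REQUEST.format(response=response, critique=critique))
    return response

# Small subset of allenai/real-toxicity-prompts, as in the walkthrough (placeholder list here).
toxicity_prompts = ["Corruption involving the contractors is the chief culprit ..."]
revisions = [{"input": p, "revision": constitutional_revision(p)} for p in toxicity_prompts]
```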
The understanding is that this model will be better at making a decision between what is less adherent and more adherent to our constitution. So first of all, all we have to do here is just some manipulation of the prompts. We can do this using, you know, the map function for our data set, and we're basically just going to map all these into input-response pairs so that we can teach our model this kind of language, right? We can get it into the vibe, as it were, of this language. So we do that here. And then we just train the model. We're going to use, again, just the same base model that we saw before. We're going to do some, you know, processing. We have our 4-bit Mistral model, which has been instruction tuned, that's what Zephyr is, feeling good about that. Then we're going to have our LoRA parameters, so we're going to fine-tune this using LoRA, just to save space and make sure we can fit it into a Colab environment. As you can see, we don't use more than 7.2 gigabytes of GPU RAM, which means this can be done even on the free instance of Colab. And the idea here is that we're going to just use supervised fine-tuning to train our Zephyr model on those input-output pairs that we created, which we created using that constitutional revision process. And we just train it up. You know, it doesn't take very long because there's not very much to train, and the idea is that we just want to get our model into the right kind of language. Right, now we can generate our harmlessness data set with that resultant feedback model. And the way that we're going to do that is very straightforward, right? We have this idea of a feedback model. We're just going to ensure we put it into a format that can be used in a pipeline, since the PEFT model is not compatible with the pipeline. So we merge and unload our model. Then we're going to go ahead and create that pipeline using our SFT model that we just merged and unloaded above. And then this is kind of like, you know, quote unquote, where the magic happens, right? So supervised fine-tuning to get the model to generate text that we're happier with is the first step. And now we're going to create that human-level data set, right? So we're going to create this data set that's going to help us to, you know, train a preference model potentially, you know, whatever else you use it for. Now, as Greg said before, you don't have to do this step. You could actually use this SFT model here to generate the scores out of the box, but we're going to stick with kind of the paradigm we're used to, which is to create this data set that we can use to train our preference or reward model. And the way we do this is just with prompting, right? This is the reason that it's so, you know, scalable compared to humans. We just straight up prompt this thing, right? We can say, you know, given two responses, please select which response you think better adheres to the provided constitution by selecting the corresponding response's number. We have some input. Then we have our constitution. These are the same bullets repeated from above. You can obviously just use Python to unpack this list, but I just thought visually it made sense if we had it written out very concretely. And then we have our response zero and our response one. These are gonna be two different responses to this particular input, right?
So we're going to have our model generate two different responses to a single input, and then we're going to prompt the LM to choose which it prefers based on the constitution, as well as its own, at this point, trained-in biases. We've given it some bias toward the kind of responses we want, and so with that, in combination with the constitution, it will simply select the one it feels is best. Now, there are a number of different ways we could do this. We could use logits to determine how likely it is to give us a zero or a one; we could use any number of methods to pick between the responses. We're just going to do simple string parsing here to make it very clear what we're doing, but the idea is we just need some way to determine which response the model preferred. And all we do is map this over our dataset. Basically, we get two responses to each input, or prompt. Then we format our comparison prompt and get our decision from the model we supervised fine-tuned, which takes in that big comparison prompt we saw above, with the responses inserted into it, as well as the input. Then we do simple string parsing to determine whether the model picked zero or one, and if it mentions both zero and one, we only consider the first one it finds as the preference. The model will sometimes give responses like, "I selected response zero because it was the least harmful, and response one was too harmful," and we don't want to confuse things and inject bad signal. Again, there are many ways to do this, some cleaner than others, but I think this is illustrative of what we're trying to do and how simple the process really is. Then all we do is pack that into a dataset, which we construct down below. We wind up with our dataset with these extra columns, and we can look at them. So the input here is "corruption involving the contractors is the chief culprit for the prison's problems, according to a recent..." and then one response talks about an investigation by the Boston Globe over the last decade, yada yada, it goes on and on, while the other talks about an investigative report by the Associated Press detailing serious issues with a new prison, including a leaky roof and a number of other problems. The model prefers the one response over the other. There's not an egregious mistake in the second response, but the language of the preferred one does seem a little less pointed, a little less charged. And you can see that it does the same thing for all of the prompts we've selected. And this is the idea, right? We've now got a dataset in exactly the same format we used last event to train our preference model, and we haven't had any humans in here labeling it at all. So with that, I'm going to pass you back to Greg, who's going to close us out and lead our Q&A. Yeah, thanks, Chris. It looks pretty cool and straightforward that we can just use AI to help us do RL with AI. I mean, that's pretty cool.
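Here is a minimal sketch of that comparison-and-parsing step; the prompt template, the helper names, and the fallback to response 0 when parsing fails are my assumptions, and `complete` is the generation helper sketched earlier, now backed by the merged feedback model.

    import re

    COMPARISON_TEMPLATE = (
        "Given two responses, select which response better adheres to the provided constitution "
        "by answering with the corresponding response's number.\n"
        "Input: {input}\nConstitution:\n{constitution}\n"
        "Response 0: {r0}\nResponse 1: {r1}\nAnswer:"
    )

    def pick_preference(decision_text: str) -> int:
        # Take the first "0" or "1" that appears, since the model sometimes mentions both.
        match = re.search(r"[01]", decision_text)
        return int(match.group()) if match else 0   # default to response 0 if parsing fails

    def label_row(example: dict, constitution_text: str) -> dict:
        r0 = complete(example["input"])             # two independent generations for the same input
        r1 = complete(example["input"])
        decision = complete(COMPARISON_TEMPLATE.format(
            input=example["input"], constitution=constitution_text, r0=r0, r1=r1))
        choice = pick_preference(decision)
        return {"chosen": r1 if choice == 1 else r0,
                "rejected": r0 if choice == 1 else r1}

    # preference_ds = raw_ds.map(lambda ex: label_row(ex, "\n".join(constitution)))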
Now, if you didn't catch it, what we're really talking about here is using this AI constitution, AI revision, and SFT to create the SL-CAI (supervised learning for constitutional AI) model, which at the end of the day is really there to help us generate this rewards dataset. That is the crux of constitutional AI and of RLAIF. Beyond this, we simply have to go train our reward model, set up our PPO scheme, and run through the policy optimization exactly the same way as we did with RLHF. So remember, to bring us back to the beginning: the problem with RLHF, really the part that was the problem, was that it was expensive. The RLAIF paper from Google came out in September 2023 and was revised this past December, and they said that this works even, remember, if the model doing the assessments is the same size as the policy model. And perhaps more importantly, we actually don't even need to distill preferences into a separate reward model: we can simply directly prompt the LLM to provide rewards, and this can outperform the constitutional AI setup. That's pretty cool. Being direct. Now, in the constitutional AI setup, the point of stage one is to create a model that can then be used to create the harmlessness dataset, and that dataset is the piece that's used here to actually train the rewards model. So what we have, finally, in conclusion: if we're doing RLAIF a la constitutional AI, we're creating the revision dataset, we're fine-tuning the SL-CAI model, and we're creating the harmlessness dataset. The crux of the matter is that we're doing all of this work to use AI to build the rewards dataset, which we then fine-tune the rewards model with, and RL still comes in at the end. Google extends this work about a year after Anthropic to direct prompting rather than fine-tuning a rewards model. Being more direct is going to be a theme for us as we continue our alignment series in two weeks with direct preference optimization. With that, that's a wrap for today. If you have any questions, please drop them in the Slido link; we're happy to have a discussion with whoever's joining us live. What have you got on alignment? Let's go ahead and chat about it. Chris, all right, time for a little Q&A. And I like this idea that the crux of all this work we're doing is just to create a dataset that we could actually just go pull off the shelf from Anthropic. But the hard part about that dataset was actually making it, right? That was sort of the whole thing. Yeah, it's so interesting that it is so much work to generate that dataset, but the actual process of doing it, the code, is not that difficult. It's just the time and the spend you have to put in to get a very large, performant dataset that makes the off-the-shelf options pretty attractive for the cases where they exist, for sure. Right, right, because even just running inference all those times, to generate, then perform the critique, then generate another, and then do this over and over to build a big dataset, all of that actually costs money as well.
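For completeness, a hedged sketch of the reward-model step that follows, using TRL's RewardTrainer on the chosen/rejected pairs; the tiny classification backbone and example rows are placeholders, and the expected preprocessing varies across TRL versions.

    from datasets import Dataset
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    from trl import RewardConfig, RewardTrainer

    backbone = "distilbert-base-uncased"   # small stand-in; any classifier with a single output works
    tokenizer = AutoTokenizer.from_pretrained(backbone)
    reward_model = AutoModelForSequenceClassification.from_pretrained(backbone, num_labels=1)

    # In practice this is the preference dataset produced by the comparison step above.
    preference_ds = Dataset.from_list([
        {"chosen": "A calm, factual summary of the report.",
         "rejected": "A charged, inflammatory summary of the report."},
    ])

    def tokenize(row):
        chosen = tokenizer(row["chosen"], truncation=True)
        rejected = tokenizer(row["rejected"], truncation=True)
        return {"input_ids_chosen": chosen["input_ids"],
                "attention_mask_chosen": chosen["attention_mask"],
                "input_ids_rejected": rejected["input_ids"],
                "attention_mask_rejected": rejected["attention_mask"]}

    trainer = RewardTrainer(
        model=reward_model,
        tokenizer=tokenizer,
        args=RewardConfig(output_dir="reward-model", max_length=512, num_train_epochs=1),
        train_dataset=preference_ds.map(tokenize),
    )
    trainer.train()   # the trained scorer then plugs into PPO exactly as in RLHF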
Cost time too. Whereas you can just pull that one off the shelf today and maybe just use it with your model straight away. Right. That's exactly right yeah so we've got uh manny yes we did have a part one uh it was the rlhf and we did it a few weeks back we're going to sort of separate the alignment series each by two weeks so dpo is coming up next and then um question rlhf sounds amazing and almost too good to be true what are the limitations with our aif rlaif sounds amazing and almost too good to be true are there limitations and what's the what are the weaknesses of this particular approach beyond sort of it still costs money that we just kind of covered it does still cost money so it's not free right it's not free lunch or anything like that it costs much less money typically than you would pay human annotators. And the kind of quality of data that you get out the other side is probably winds up being a little bit better because of the volume of data you can get for the same cost. But a couple of limitations are it still does use this rl based approach where we're shifting the problem into the rl space uh in order to to solve it which does mean it comes with training stability issues now uh the rl hf event we did a couple weeks ago does talk about ppo as a method of training there are more stable ways that we could train uh and include this information but even still we're always kind of butting up against this training stability problem uh and training stability problems could result in larger costs right if you you know if you're trying to train a model and you the training fails a number of times right you still have to pay for all the all the failed runs so So, um, other than that, I would say it's weaknesses. This is not used useful for really macro changes. Like, uh, it's not going to get your model to be like really good at physics or math. It can be part of that process, but you're going to have to do a lot of heavy lifting to get to the point where RLAIF becomes, you know, useful enough to, to really like be amazing. So you can always use it and it's always going to improve things, but the closer you are to, to an aligned model before you start this process, the kind of bigger gains you'll get from RLAIF. So maybe those are some limitations. Yeah, yeah. So, you know, I wonder if you can just sort of like riff for a minute on like this. So this Google paper tries to take this to the next level. And basically they're like, hey, you don't need sort of this rewards model anymore. You can be more direct. Can you talk a little bit about how you interpret that and what the easiest way you think is to sort of think about this sort of first step in more directness as we head down the alignment path? Yeah, for sure. So like, okay, let's think about what we're doing, right? We're creating a model and then we're using that model to make decisions between two responses, right? We're saying a model and then we're using that model to make decisions between two responses, right? We're saying this response is preferred, this response is dispreferred. So that is kind of an extra step in a way, because if we instead had a model that just gave us like, say, a score between zero and 10 or something like that, right? Really any kind of score that communicates some range of not good, too good, right? Then we don't really need to train the reward model. And we're already trusting the model to capture our preference. 
You know, we are implicitly trusting it because we let it create a reward dataset, right? We let it create that preference dataset. So we're already saying this model is a useful standard for a set, right? We let it create that preference data set. So we're already saying this model is a useful standard for a human, right? So why not just cut out the kind of middleman preference model data set part and use this, you know, either constitutionally changed train model or in the RLAIF model, right? Just the off-the-shelf LLM. Why not just trust it to be the human standard and generate the rewardlaif model right just the off-the-shelf llm why not just trust it to be the human stand-in and generate the the reward scores directly right why would we why do we need to do this extra step now there is some utility we can fact check it we can having the data set means that we can kind of go through it comb through it remove responses we definitely don't agree with or we can pay human annotators much less to review a data set versus to create one. So there's a lot of different kind of, you know, levers to twiddle here. But the idea is, if we trust it to create that preference data set, right, why not just trust it to create the scores that we use during our alignment training, right? So PPO or the version that Google used in their paper. Yeah, yeah. So when we do that, then we need to be more specific in our prompting then. We want to give very, very specific instructions like generate one, then generate another, then assess them based on these couple things think through before you provide a response but but it's all sort of um self-contained in a single in a single generation versus many many many inference calls yeah yeah that's a great way to think about it and you know i'm kind of struck by you know when when uh open ai was going from gpt one to gpt two they were like well we can do unsupervised and then we can supervise fine tune and then they're like well actually if we just if we just directly sort of prompt the unsupervised pre-trained model without supervised fine tuning it works pretty well too this sort of zero shot task transfer idea that turned into in context learning and sort of what became prompt engineering by GPT-3. So there's sort of like, it almost feels like there's a little nested lesson in here that we're relearning at a new layer of abstraction, something like that, right? What is the field if not just constantly relearning the same lessons over and over again and applying them in new ways, right? Yeah, yeah yeah right right right yeah it's uh more meta always more meta okay friends so how many different models lms are involved in these various fine tuning processes helpful llm laura llm opening eye scoring llm how am i supposed to keep track of all these llms you know and sometimes it's just one of one lm but you're using it for 16 different things like uh yeah like what what what advice do you have for people to try to keep this stuff straight are there primary categories that we can put these things in boxes yeah i think so so so number one we want to kind of think of it in this way, right? We have our base model and then, uh, you know, we'll, we'll ignore, uh, constitutional learning for a second. We have our base model. Then we have some kind of, uh, reward model, right? Uh, and then we have our policy model. So that's, that's kind of like the three that you need, right? 
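A small sketch of the "more direct" variant discussed a moment ago: skip the separate reward model and prompt the LLM for a numeric score that feeds straight into the RL step. The prompt wording, parsing, and normalization here are my assumptions.

    import re

    SCORING_PROMPT = (
        "Rate how well the response follows the constitution below, from 0 (clearly violates it) "
        "to 10 (follows it completely). Reply with a single number.\n"
        "Constitution:\n{constitution}\nPrompt: {prompt}\nResponse: {response}\nScore:"
    )

    def direct_reward(generate, constitution: str, prompt: str, response: str) -> float:
        raw = generate(SCORING_PROMPT.format(constitution=constitution, prompt=prompt, response=response))
        match = re.search(r"\d+(\.\d+)?", raw)
        score = float(match.group()) if match else 0.0
        return min(max(score, 0.0), 10.0) / 10.0   # normalized reward for the PPO (or similar) update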
When we introduce constitutional AI, we also introduce the idea of this supervised constitutional AI model, right? So that is going to be, it's going to wind up taking the place of our policy model after we create it. But it, you know, until then it's there. and it's also going to be our feedback model so in reality a lot of these models can be the same which is even trickier I understand but you really only need the base model the reward model the policy model and if using constitutional AI will create a feedback model which is going to be our SLCA model. So there's four distinct models. The reward model is the most flexible because it can be anything that takes in text and outputs a score that's aligned with our preference, right? So it can be, it can even be an API, right? It doesn't, it could be whatever you want. It could be a, just a function that randomly select score basically if we don't do that, obviously, but like it could be. Whereas like those the base L11, the policy or are likely to be LMS using that kind of traditional decoder only network architecture. Okay, so I guess I guess I want to sort of end on this idea of more directness. So the next thing that we're going to cover is direct preference optimization. And, you know, here we saw that to go from sort of RLHF to RLAIF, we're like, well, how about we cut out this like sort of more complex piece, this human, like, man, dealing with humans, such pain, right? Then it was like to go from sort of constitutional AI to RL, AIF, it's like, how about we just like cut out a couple of the AIs, you know, just more direct. And so when we're talking about direct preference optimization, for people that have stuck around today, what's sort of a nugget of how to think about what's being cut out and what exactly we're being more direct about? Yeah. I think the easiest way for me to think about and communicate this is that we have this weird process that we need to map this one, whatever you're going to refer to it as, and then we have to allow it to interact with a different domain of machine learning. In this case, reinforcement learning. So the idea of DPO is we cut out that bit, right, where we're translating this problem to another domain, which leads to stability issues, which leads to a number of things, right? If we can keep it all in the place we're currently living, right? We don't have that translation error. It's wrong to say translation error, but it's a useful way to think about it. But yeah, I would say that's where we're cutting out. We're cutting out that translation from kind of our LLM, how we typically think about training LLMs to a totally different machine learning domain. Man, I just love how this field is maturing. We continue to see indeed the simplest solution is the best over and over and over again. It feels like the classic return to linear regression when you're in production or something from classic ML. Chris, man, it's been a blast hanging with you for this Q&A sesh. We'll see you back next time when we're back next week on YouTube live again. So thanks everybody for joining us today. If you haven't gone ahead, please go ahead and like and subscribe. Definitely, we want to keep bringing content to you that you value. If you like this session, dig into our community a little bit more. We've got a Discord. We're really, really on a mission to build the world's leading community for people who wanna build production LLM applications. We'd love to have you. So join us today. 
And if you wanna tinker on your own, and I know there was a question in the chat just a moment ago, looking for an old notebook, you can actually find all of our old notebooks in a nice table in our awesome aim index on our GitHub. So give that thing a star and follow along there if you want quick access to all of our notebooks, which you can always find in the first comment of our YouTube videos. And then finally, if you're looking to accelerate your LLM application development from prototyping to production, check out the AI Engineering Bootcamp, our industry-leading cohort-based live online course where you can fill all your skills gaps that you might have related to building, operating, and improving LLM applications in production. Lastly, if you have feedback for today, we'll drop a feedback form. Luma will also send you one. And that's a wrap. Until next time, keep building, shipping, and sharing, and we'll certainly do the same. We'll see you back next week, everybody. Have a good one.
Reinforcement Learning with AI Feedback (RLAIF) | Constitutional AI
3659
AI Makerspace
20240215
GPT-4 Summary: Dive into the cutting-edge world of Large Language Models (LLMs) alignment with our latest YouTube series! Our second event zeroes in on Reinforcement Learning with AI Feedback (RLAIF) or "constitutional AI," an innovative method designed to overcome the high costs associated with human data collection in fine-tuning LLMs. Discover how RLAIF utilizes an AI-generated "constitution" to evaluate and refine responses to harmful prompts, paving the way for more ethical AI interactions. We'll walk you through the entire RLAIF process, from crafting an AI constitution, generating critique-based revisions, to the advanced training techniques like Supervised Learning for Constitutional AI and Proximal Policy Optimization. Plus, get hands-on with our Google Colab notebook, offering all the code you need. Don't miss out on this opportunity to explore the nuances of AI alignment, the interplay between RLAIF and RLHF, and the practical steps to harness AI for creating safer, more helpful digital assistants. Join us live for an insightful journey into the future of AI ethics and alignment! Event page: https://lu.ma/llmsrlaif Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/gvvQ9NXEq6RZrWSj6
2024-06-10T09:56:39.414471
https://youtube.com/live/LvYGK4-1J58
Hey, Chris, you heard of this Olmo language model, man? I have, Greg. Yes. Is it like really, truly, for real, the real deal, open source, open, open source model that we finally have on the market? It is really and truly the open, open source model that we have have a market. It is really and truly the open open source model that we have on the market. Absolutely, 100%. Yeah, man, I was really impressed when I was looking into this and I saw what AI2 had been up to the past few years opening this space up and it really culminates in OMO. You ready to show everybody what we found and show everybody where the edge of the open source is today? I sure am Greg. All right man we'll see you back in a bit to demo what is possible these days. All right everybody my name is Dr. Greg that was the whiz and we're talking Ulmo today. What you're going to learn is you're going to learn We're talking OMO today. What you're going to learn is you're going to learn a number of different pieces that the Allen Institute for AI has been sort of different models and also the architectures, as well as the different toolkits that surround OMO. It's sort of an entire ecosystem of tools that you should probably be aware of if you're looking to build out on the edge. that we can go through of how to train a real LLM that is performant at the levels we see from meta, from open AI, at least close to those levels. So let's go ahead and get into it today. If you got questions, drop them in the Slido link and we'll go ahead and probably have plenty of time for questions at the end. If you guys want to chat about what's next and what is exactly going on inside the different components we'll go through today a lot of trailheads for the olmo model because there's a lot of pieces to the puzzle if you read the blog post you'll notice that today we're going to talk about olmo but we're going to talk about a number of other things as well we're going to look at the but we're going to talk about a number of other things as well. We're going to look at the data. We're going to look at the model architectures. We're also going to look at the evaluation toolkits and even the way that we know that they were able to actually process the pre-training data, not just what it was, but how they actually processed it we'll go into. Not just what it was, but how they actually processed it we'll go into. And then we're going to show you what you need to do to fine-tune OMO on the data sets that are available to fine-tune it today. All right, so AI2 and OMO up first. We're going to talk DOMA, Paloma, and more. And then we're going to show you what you need to do to fine-tune OMO. Okay, so first off, what's AI2? And how did this thing come about? Well, it's important to know that the Allen Institute for AI is nonprofit research institute circa about 2014 when they were founded. And this was created by Paul Allen, the late co founder of Microsoft. And so this thing has been around for a while, and they've been doing cool stuff for a while. Their mission is to conduct high-impact AI research and engineering in service of the common good. Gotta love that. So they're really, really up to some cool things. And you may have heard of them from different benchmark data sets they created. For instance, one that's really stood the test of time is the ARC data set, also known as the AI2 Reasoning Challenge, the ARC. And this is all about common sense reasoning, quote, common sense reasoning. 
It's a collection of about 8,000 grade school level, multiple choice science questions. And it is still today, even though it was created now six years ago, it's still part of the Hugging Face Open L and Leaderboard. And if you look at that, it's getting the highest score today was 76.02, whereas the average across all benchmarks was actually higher than that. So this thing is holding up against LLMs very, very well. And, you know, it's really a testament to kind of some of the good work that they've been doing at AI2, which I learned a lot about as I was putting this together for you guys today. lot about as I was putting this together for you guys today. Also worth noting is they did also create the drop data set that was in the Hugging Face Leaderboard but came out of the Hugging Face Leaderboard over the past couple of months. So check out discrete reasoning over paragraphs to learn more about that data set as well. A lot of cool stuff from AI too. Of course, we're here to talk about OMO today. And the big idea with OMO, the open language model, is to really try to inspire all of you and a larger community of folks to really continue the innovation in the LLM space and build on what we already know. Right? So rather than continuing to see more and more proprietary things, we're also continuing to see more and more open initiatives, which is very, very cool. So the Olmo model series gives us access not just to part of stuff, but like really the whole kit and caboodle. We have really every step we need. It's kind of the cookbook to build an LLM and to make sure you do it in a really, really sort of sophisticated way from the outset. So you're minimizing the waste that you might put into compute, that you might put into using the wrong data, that you might put into choosing the wrong architecture, that you might put into aiming at the wrong benchmark evaluations. So really, really the suite of tools that they've provided for everybody out there that is interested in developing their own LLMs, which many domains and companies will be, is really kind of monumental. So this is from the ULMA paper, and we'll kind of do a few paper studies today. They're talking about recent open LLMs or LMs, language models. And so, of course, everybody knows Mixtral, the mixture of experts model that came out recently. And so, of course, everybody knows Mixtrel, the mixture of experts model that came out recently. They provide, of course, all their model weights, and they gave us a little report early on in 2024, but they didn't tell us a whole lot more than that. LAMA and LAMA2, especially the paper, went into a ton of details on how they were sort of doing the adaptation and the instruction tuning in the RLHF. That was sort of an amazing insight into how you should think about tuning for general alignment. And then the Mosaic pre-trained transformer, they provided things like the data set distribution, but not the data itself. Falcon, if you remember Falcon coming out, they gave some of the pre-training data. And then the most open models like Pythia from Eleuther AI and Bloom, that was a sort of collaboration, quote, bootstrapped by Hugging Face and others. They released lots and lots of stuff and that kind of set the tone, I think, for where we are today. So where does Olmo fit in? we are today. So where does OMO fit in? Well, OMO is kind of trying to say, okay, well, there are other folks that have been working on this sort of truly open space. LLM 360 is one example. 
And they really wanted this whole framework from, you know, pre-training data to actual training to what the evaluation tool sets are. They wanted to sort of provide this whole thing. And OMO is kind of the whole thing now, which is very cool. So it's kind of trying to narrow that gap between the open source, truly open source that's happening now. And then things like Lama2 2 where it's a lot of it is open source and it's very very helpful very useful to many of us but still we're left wondering about many of the details so this idea of sort of truly open is important here what does this mean well it means you know we've got all the model weights, we've got the training code, we've got the training logs, ablations, metrics, and we even have the inference code. So they went and they built four variants at the seven billion scale. And they use different architectures, optimizers, training hardware, and they also used one, built one at the 1 billion parameter scale. That's the one we'll pick up off the shelf today. And they're releasing hundreds of checkpoints, the full training data, and then to sort of get a feel of this ecosystem. And we want to go through these tools today. They're like, well, we also have code that produces the training data, which is dope. And that's the Dolma code slash paper slash project. There's also a few other projects that are very, very useful and worth knowing about. The WIMD helps us analyze pre-training data the catwalk helps us do sort of large scale evaluation and paloma allows us to do perplexity based evaluation finally the open instruct data set is going to help more and more people do instruction tuning and do it in a way that's not sort of as random as it feels today. Which data set should I pick up? How am I actually going to do better generally across tasks with any given data set versus another? These things aren't really clear. So AI2 is really tapping into a lot of things as they try to address the entire suite of tools here. And really cool to note that they're currently using this OpenInstruct dataset to release a big OMO. Not yet out, but, you know, stay tuned. Okay, so we're going to go through each of those tools, but the thing to keep in mind, just like all of these companies and initiatives, they're all like, well, you know, I mean, this is just day one. You know, we talked yesterday, Langchain V0.1. This is the first step on the journey of a thousand miles. It's very much the same for everybody in the space. And there's going to be bigger models, more data, better curated, higher quality, you know, more efficient instruct tuning, better alignment. And, you know, of course, as we get into, you know, the future in the 21st century we're going to start seeing the images come in we're going to start seeing the multimodal stuff come in there's going to be lots of stuff that we can expect from AI2 moving forward and that's super cool I want to just take a moment to talk a little bit about the architecture it's not groundbreaking so I just want to kind of put it in context for you guys. And then we'll dig into the data and then the additional pieces of the evaluation suite before we check out how we can do some fine tuning today. So just harken back, if you will, for a moment to a couple of the models that OpenAI put out. GPT-1, 2, 3. These were generative pre-trained transformers, decoder-only style, et cetera, et cetera. We have some data on what they were doing with GPT-1, 2, and 3, and what the architecture looked like. 
Of course, this is just a simple transformer with a tension and a feed forward within each block. And what we're kind of talking about when we're talking about architectures, we're talking about how many layers, this would mean sort of attention layers. Technically, each of these are layers, but let's sort of liken the idea of layer to the idea of decoder block so you know 12 layer and then within each attention head attention layer we also have within within each attention layer we have 12 heads this is a multi-headed attention so we're sort of 12 heads in each layer and therefore in each block. So 12 by 12, this was GPT-1. They used 117 million parameters and about four and a half gigabytes of text. GPT-2, the big GPT-2, they said 48 layers and they weren't super clear about the number of attention heads, but I think we can guess at this point in 2023 as we'll see uh 1.5 billion parameters and 40 gigabytes of text sort of the same general architecture and then gpt3 they used 96 layers 96 heads and 175 billion parameters 10xing again and again And 175 billion parameters, 10x-ing again and again, 570 gigabytes of text. Okay, so you're seeing the sort of interesting experiments they were doing at the time, you know, sort of the square 12 by 12, but then they also were looking at some other configurations as well. Let's take a look at the Olmo numbers. Well, Olmo in similarity and also in contrast here, we're seeing that the 1 billion has a 16 by 16. We're seeing that the 7 billion is 32 by 32 and the 65 billion that's currently in training. Interestingly, look at that. It's 80 by 64. It's not 80 by 80 or 64 by 64. So I wonder why they chose that. And maybe we'll find out when they finish training. And the data set that they trained on, we see 2 trillion, 2.46 trillion. And I wonder what's going to show up here. Well, let's find out, or at least try to find out by looking at the additional pieces of the suite of evaluation tools, Dolma, Paloma, and even more. So the puzzle pieces we want to put together here are those associated with data, Dolma, and WMD, evaluation, that's Catwalk, and Paloma, and Paloma, and Instruction Tuning, Open Instruct. So the DOMA data set, what's that? Well, it's actually 3 trillion tokens. So I imagine we're going to see a 3 trillion number on that 65 billion. billion. And it's a combination of web stuff, science papers, actual code, books, social, and then sort of encyclopedic materials, this sort of, you know, code for Wikipedia, right. And all in English, so that we kind of get this diverse mixture. but it is still only in English. And if we kind of zoom in on this data set, we can see specifically that we have common crawl. What a classic. This is like petabytes of data, 12 years of web crawling, so much stuff, right? It's like the meta. And then we have the stack. This is three terabytes of code files covering about 30 programming languages, crawled once again, similar to Common Crawl from GitHub. The C4 dataset is the colossal cleaned Common Crawl, C4. And then we can keep going here. The Reddit dataset, this is actually the push shift Reddit dataset. So this was sort of like all the Reddit submissions and comments between June 2005 and April 2019. So like 650 million submissions, 5 billion comments, you know, like almost 3 million subreddits, a lot of Reddit stuff. We have this PES2O. This is where we get our academic literature. So about 40 million open access papers. They've been cleaned, filtered, formatted. 
It's derived from the Semantic Scholar Open Research Corpus, the S2ORC. Right? So there we get S2O. There it is. And Project Gutenberg is simply publicly available books, books in the public domain. So they grabbed it from April 2023. And then the Wikipedia and the Wiki books. This is our encyclopedic material from the March 2023. Wiki media dumps. OK, now you might say to yourself, well, man, that's a lot. Did they use all of that? And this is where it gets really interesting because of course the answer is no. What they did is they actually didn't just generate the data set, but they gave us all a toolkit to actually process our own data sets in the way that they did. And this is just one example of what they did. This is what they did with their web data. So Common Crawl, C4, this kind of thing, where first they go through a language filtering step, and then they do their first dedupe step by getting rid of the same URLs. Then they go to sort of this quality filter, you know, oh, no, that's low quality, get it out of here. That's low quality, get it out of here. This is high quality, let it stay. And then we have sort of this toxicity looking for things that are gonna, we probably don't want to be training and going through the core, unsupervised pre-training step for a model with this sort of content filter. Finally, a final dedupe step. And this sort of, although we had two dedupe steps for web pipelines, their other pipelines were similar, and they combined one of four language, quality, content, and dedupe filtering steps. quality content and dedupe filtering steps. This is something that is now available to all of us to go ahead and leverage their toolkit to build our own data sets. Super cool that they're putting this out there. The first thing I've seen like it and definitely worth knowing about if you're looking to build your own large language model in a particular domain, similar to say what Bloomberg GP gpt had tried to do and what many others are thinking about these days so that's dolma it's the data set but it's also the toolkit now what about this whim d thing well of course it's a funny little what's in my big data that's what what WMD stands for. And, you know, the idea is very simple here. It's like, man, we have all these super giant corpora, these many corpuses of data that we're pulling from and that we're using for unsupervised pre-training. And we don't really know exactly what the quality of that data is. So this is, again, not just a study, but they actually went to create a platform in a set of 16 analyses to allow for comparison, to allow for assessment of the quality. Now they found some interesting things. For instance, they went ahead and applied this to, you know, C4, the pile, red pajama, you know, and they found very, very high duplication, sort of synthetic data, very low quality content, personally identifiable information, toxic language, and contamination of benchmarks, meaning it was sort of like, you know, trained on data that was similar to the benchmark you might want to test it on. That's not good, right? 
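As a rough illustration of the staged web pipeline described above, here is a sketch in plain Python; this is not the Dolma toolkit's actual API, and the language, quality, and toxicity scorers are stand-in callables you would supply.

    from dataclasses import dataclass

    @dataclass
    class Doc:
        url: str
        text: str

    def web_pipeline(docs, is_english, quality_score, toxicity_score):
        # Mirrors the stages described: language filter, URL dedupe, quality filter,
        # content/toxicity filter, then a final dedupe on the text itself.
        seen_urls, seen_texts, kept = set(), set(), []
        for doc in docs:
            if not is_english(doc.text):
                continue                          # 1. language filtering
            if doc.url in seen_urls:
                continue                          # 2. first dedupe: drop repeated URLs
            seen_urls.add(doc.url)
            if quality_score(doc.text) < 0.5:
                continue                          # 3. quality filter
            if toxicity_score(doc.text) > 0.5:
                continue                          # 4. content / toxicity filter
            if doc.text in seen_texts:
                continue                          # 5. final dedupe
            seen_texts.add(doc.text)
            kept.append(doc)
        return kept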
Because then you're sort of testing on the training set right and really interestingly 50 of the docs in red pajama are duplicates that's wild so you can kind of see right duplicate d-dupe right low quality um toxic sort of this is the filters came from with dolma and You know this this contamination of important benchmarks this is a problem too right we don't want to Let our language models cheat against against benchmarks, so that's that's no good So you know if you're again using the dolma toolkit, it's a way to sort of analyze your usage of actually processing the data before you go out and you train your LLM. So that's pretty cool. Now, what about downstream? What about after you train the model? in the project they did called Catwalk, again, from AI2, is they were like, man, it's hard to compare across LLMs and benchmarks at scale, right? Because like, are you really always taking the same data to test the benchmark with against the different model? Like, it's really kind of hard to make sure that you're doing this right all the time. And of course, there are benchmark evaluation tools like from Luther that runs the back end of the OpenLM leaderboard at Hugging Face. But, you know, they said, well, Catwalk is our sort of solution to this problem of when you want to look at many different models and you want to look at many different tasks at once and so you know here's sort of the visual example of if you wanted to look at gpt2 large and t5 base on a couple different different benchmarks, you could do that pretty easily with this kind of framework that they've put together. And it's sort of designed for this n times n implementations, right, simultaneously. So it's kind of built for researchers and for people trying to do research into different sort of perturbations, different combinations of model and benchmark data sets. So they said, you know, we fine-tuned and evaluated over 64 models on 86 data sets with a single command, zero code. So, I mean, that's a pretty big flex. Pretty cool. Finally, we're going to go talk about one of the other kind of pieces that kind of complete the puzzle here. And we're going to talk about Paloma. But first, in order to talk about Paloma, we need to talk about the idea of perplexity. If you remember like, you know, the try to find cheaters, GPT-0 sort of assessment things, they were using perplexity, but you know, this is sort of a bigger picture view of perplexity. And the biggest picture, if we can sort of summarize this very, very clearly, you know, perplexity is simply measuring the probability of any given statistical models prediction. So like it's a very general thing, but a GPT style LLM, it's going to predict the probability of the next token. Right. It's an autoregressive next word predictor. And so what we're doing is we're sort of using perplexity in general to evaluate the kind of uncertainty that an LLM has in predicting the next token. Of course, we can like look at text that's been written already and we can say, is the model perplexed by this? But this is not what we're really focused on in using Paloma to look at large data sets and also fit how they've been fit to the model and sort of looking across large swaths of data to kind of give us an idea of if we train on a particular data set, let's say Common Crawl, are we actually developing the best possible next word predictor for our domain or for our task or for both. So this is sort of a really interesting evaluation toolkit to be aware of, again, completely open source. 
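Since Paloma is built around perplexity, here is a minimal sketch of how perplexity is typically computed with a causal LM: the exponential of the average per-token negative log-likelihood. GPT-2 is only a stand-in checkpoint; in practice you would point this at an OLMo checkpoint and a slice of the evaluation corpus.

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "gpt2"   # stand-in; swap in the checkpoint you want to evaluate
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id).eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])   # loss = mean next-token negative log-likelihood
        return math.exp(out.loss.item())

    print(perplexity("The quick brown fox jumps over the lazy dog."))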
And it's based on this idea that perplexity is often reported, but kind of like it's not really clear that perplexity from one distribution of data is going to extrapolate to other distributions. And so what they're kind of seeing is when they do this analysis of perplexity, they're like, well, we should train on stuff beyond common crawl, right? Common crawl sort of isn't enough to just generally extrapolate. Maybe we'll get some fundamental set of things that are enough to generally extrapolate, but if we just use common crawl, it's not enough. And so this is sort of, again, a nice measuring stick for us to be able to sort of say, are we able to build pre-training data sets that are gonna allow us to really create this sort of general approach or that we're focused on a specific domain? Can we sort of, can we make sure that our next word prediction is very, very aligned with that domain? So very, very, very aligned with that domain? So very, very cool. Maybe something that's worth a deep dive at some point. But the very last piece of our puzzle here is the instruction tuning piece. And instruction tuning isn't very complex. It's simply fine tuning an LLM on tasks that we describe with instructions. And this is kind of where we get into the Open Instruct benchmark and this sort of collection of different data sets that AI2 has once again put together. The first paper that came out with regard to Open Instruct was called How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. And so, you know, they were sort of saying again here, you know, hey, everybody's claiming in their papers that, you know, you can do really, really good with these open data sets, almost as good as proprietary models, but it's not clear how exactly that is getting evaluated. And so they wanted to sort of, again, bring this into their ecosystem of tool sets. And they were like, all right, we provide a large set of instruction-tuned models from seven to 70 nearly billion in size, trained on 12 different instruction tuned data sets. This is before Ulmo. These are actually known as the Tulu models. And the Tulu model, as we can sort of read, is a model that they essentially took this open instruct data set they used a lot of their research they were doing prior to this olmo and they were like you know basically what we learned is that no single instruction tuning dataset is gonna be the best across all things, which I mean, isn't really surprising, but it's very cool to see that properly evaluated, similar to how we need these evaluation frameworks when we're building RAG systems, for instance, like we talked about recently. And so we've got this kind of ability to say now that actually this 87% of chat GPT performance, 73% of GPT performance, these are numbers that we can sort of use OpenInstruct to very clearly and cleanly draw lines on. And then they actually did some more recent work and they released Tulu2, right? So if you haven't heard of any of this AI2 stuff, there's a lot to dig into and check this out. I mean, oh my God, I can't believe it, right? They released not just Tulu2, they released Tulu2 v2 mix, right? Instruction tuning mix, Tulu2, Tulu2 plus DPO, classic, direct preference optimization, basically the latest and greatest alignment technique, and then a code Tulu. So a lot of this stuff has been happening all along, and really it comes together to sort of show us that AI2 has been doing some really, really great work. 
And Olmo is only sort of the culmination of all of this work that they've been doing. So pretty awesome to kind of dig in and learn about this. And, you know, what we're going to show you now is we're going to show you what's possible today. We're going to take the OMO 1 billion base LLM, and we're going to show you what we can tune it on. And we're going to actually end up using this tool V2SFT mixture rather than an open instruct data set. And Chris is going to tell you exactly why as he digs into the code for instruct tuning. Wiz, over to you, man. Thanks, Greg. Yes. Okay. So basically the idea here is going to be pretty straightforward. There is a hiccup that we're going to run into that I will keep my eye on to make sure that as soon as it's resolved or I have a solution, I'll put it forward. But you know, for now, basically, how are we going to fine tune OMO? There's a couple of things we need to keep in mind. So OMO is released on GitHub in its kind of own format and format and converted to Hugging Face. It does work very well with inference. It's very good at generating responses. But we're still, you know, there's a little bit of friction when it comes to training. So, we're going to go through how we will train it once I have a solution for the specific problem. And then we're also going to show how we can fine tune it using Olmo's or using the actual Allen AI tools. So first things first, you know, we're going to be using QLoraLora. We're going to be using the Olmo1b. We're going to pass in our Huggy Face token. We do have to get access to some of these data sets. So while we won't be using the entire data set for this training, we're going to use a subset you don't need a license for. If you wanted to use the full Tulu instruction data set, you will need to have your Huggy Face token. Other than that, we're just going to load the model up. We just load it from pre-trained. We set our tokenizer. We fix our tokenizer settings so that we are looking at the correct padding. We want to pad to the right. And then we want to be able to jam a lot of information into our prompts. So we're going to have that pad token be our end of sequence token. You'll notice that Ulmo only has an end of text token. So we would also potentially want to add a beginning of text token. But for this use case, we're just going to go ahead and keep it as is. We look at the model. Now, this is a refreshing view. If you're used to looking at these outputs, it's slightly different looking than usual, right? It's not just Lama again. You'll see, though, it has everything that we're used to. We have our activation. We have some dropout. We have our attention out layer. We have our feed forward out layer. We have some rotary embeddings. That's great. We have our attention and feed forward normalization layers. And then we have our attention projection. This is like equivalent to Qproj, that's great. We have our attention and feed forward normalization layers. And then we have our attention projection. This is like equivalent to Qproj, Bproj, you know, if you're used to those specific layers. And then, of course, our feed forward proj as well. Then our out. So this is, I mean, it's at the end of the day, it's still a bunch of decoders in a row. But it is, you know is slightly different naming conventions. That's awesome. When it comes to the data and data set preparation, we're going to be using a subset of the Allen AI Tulu V2. This is, again, it's an unlicensed part of it. 
Also, it's called Wizard LM, so that's dope. What we're going to do, so the full Tulu mixture is a bunch of different tasks. It's a very comprehensive data set. You know, we have everything from Flan, Open Assistant, Shared GPT. It's got it all, right? But we're just going to use this wizard LM because it doesn't require that license. So if in case you haven't got the license for data sets, you shouldn't run into an error here. We're just going to load it up. We don't actually want all of it. So we're just going to get a subset of our dataset to use since we're going to be training for a few, you know, very few iterations. We're not going to be fully fine tuning this, but we will look at code that will help us to do that in a moment. We're going to look at our dataset. We're happy. We have our instructions and outputs for 500 rows. And then we have a test which is 100 rows. So, feels good. We're going to put our data into a specific format. This is because we want to instruction tune it on this specific kind of task. So, we're going to create a prompt template that we can use and reuse, basically for every instruction set when we're doing our SFT training at the end of the notebook. It's simple enough, right? We got our instruction, we got our response. There you go. When it comes to the, you know, actual dataset creation, again, we don't have that beginning of sequence or beginning of text, start of text token. So it might be useful to add that if you were going to do it full fine tune. Just do the due diligence to test your model before and after and see which performs better. We're just going to stick with exactly what they have. And then we're going to also look at generating a response. So as you can see, you know, this is all just straight hugging face code, and it generates like a dream, right? So we have instruction below is instruction that describes a task, write a response that appropriately completes the request. What are the different types of camelids? There are two types of camelids, a laminate and alpaca, input and output, the input, the output, you know, so it goes on and on. Again, this is not like a, the base model is not instruction tuned. So it makes sense. I'm just going to kind of keep generating forever and ever, but we don't have to worry about that too much. That's what we're going to try to fix. You'll notice the only thing that's a little bit different is that we have this kind of casting, right? So we have to cast our values to CUDA in order to ensure that we're on the GPU. This is not necessarily a step that you'd see in other models, but it's important to include it for Ulmo. Now we're digging. Basically, this is all the same as we're used to, right? So we're used to. We're going to go ahead and prepare a model for K-bit training. The Hugging Face implementation does not use gradient checkpointing. It is the case that the OMO version does. If you're fine tuning using some of the scripts we're going to look at later on, you're going to be able to take advantage of gradient checkpointing. If you're going to try to do this in a notebook, you will not be able to right now using Hugging Face Transformers library. So we're just going to set this flag to false so that we don't run into any errors. Next up is LoRa. The only real difference between this and regular LoRa is that we have to specify our target module. It's not included as a default. It's not within the Pest library yet. So we just have to manually point at the module that we want. 
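A hedged sketch of the load-and-generate flow described here; the checkpoint id, the trust_remote_code requirement, and the exact prompt template wording are assumptions based on the walkthrough rather than verified repo details.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "allenai/OLMo-1B"   # assumed checkpoint name; may need the hf_olmo package installed
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True,
                                                 torch_dtype=torch.float16).to("cuda")

    # OLMo ships only an end-of-text token, so reuse it for padding and pad on the right.
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"

    PROMPT_TEMPLATE = ("Below is an instruction that describes a task. "
                       "Write a response that appropriately completes the request.\n"
                       "### Instruction:\n{instruction}\n### Response:\n")

    inputs = tokenizer(PROMPT_TEMPLATE.format(instruction="What are the different types of camelids?"),
                       return_tensors="pt")
    inputs = {k: v.to("cuda") for k, v in inputs.items()}   # the explicit cast to CUDA mentioned above
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))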
Again, we can point at all of our projections as for the QLoRa. But for this one, we're just going to keep it simple, keep it pointing to attention projection. This is the same as targeting QK or QV proj in say like a Lama style model. But you will have to fill this out. There is no default. So you will have to target specific modules. With our rank and our alpha, we're just kind of choosing arbitrary values here. We're not doing too much thought into it. There is some wisdom available in the fine-tuning repository, which is the open instruct, which we can look at in a second. But for the most part, we're just going to choose values that make sense. And then we're going to get PEF model. Again, this is all the same that you might be used to if you're used to using the Huggy Face Transformers library. Next is, yes, it's the same as always. Nothing has changed. You know, we're using the same trainer that we would normally. And we're using the same SFT trainer that we would from TRL. and we're using the same SFT trainer that we would from TRL. We're able to do this exactly as we're used to, right? We have our create prompt, which we built up above that formats our rows into the correct format for when we tokenize them. And then we are going to pass in our train and test set so that we can eval during our training. We have a lot of data left over that we can save for tests since we're only using 600 of the 70,000 examples. But, you know, in this case, we're just going to use that small eval data set to eval during training. And that's where we have to stop for now. Right now, the forward method isn't compatible with the default input or the default that's provided from the tokenizer. So you will run into an error during training. I will update this as soon as there's a concrete solution. But for right now, this is where we kind of would end. You can just go on and do train from here. So let's look at how we would want to actually fine tune this if we had access to a little bit more compute and say we were not in the Colab environment. So in this case, it's pretty straightforward, right? We have the same kind of thing that we did with our data that's being tucked away into the script for Tulu. It's doing exactly what we saw before, right? It's just converting it into a specific format and then preprocessing the data a little bit, making sure that it's all in an expected format, right? So that's all this script is doing. After that, basically, we're just going to pass in some configs. The configs are just going to show kind of what we want to be true about the model. So if we look at our example of a config and say, let's, we go to official, we go to the, sorry, we go to the 1 billion parameter model. You can see here that we have like, you know, the dimension of the model, the number of heads, number of layers, the MLP ratio, all of these things, right? These are different knobs we can tune. You don't have to. If you're using the kind of main checkpoint or the final training checkpoint as a base model, you can leave these things basically all exactly as you see them. You only have to change a few different things. The idea, though, is that we are all we're doing is we're, you know, giving the trainer some information that it needs in order to train our model stably and on the data we wish. You'll notice that we have to change the load path. We have to set the reset trainer state to true. And then we have to point our data paths to that preprocessed data that we created using that script. 
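A sketch of the LoRA and trainer setup for this step, reusing the model, tokenizer, and PROMPT_TEMPLATE from the previous sketch and assuming train_ds / test_ds are the 500- and 100-row Tulu subsets mentioned above; the "att_proj" target follows the module name given in the walkthrough, and SFTTrainer argument names differ across TRL versions.

    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
    from transformers import TrainingArguments
    from trl import SFTTrainer

    # The Hugging Face OLMo port doesn't support gradient checkpointing, hence the flag.
    model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=False)

    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["att_proj"],   # no PEFT default exists for OLMo, so name the projection explicitly
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)

    def create_prompt(batch):
        # Batched formatting: turn each instruction/output row into one training string.
        return [PROMPT_TEMPLATE.format(instruction=i) + o
                for i, o in zip(batch["instruction"], batch["output"])]

    trainer = SFTTrainer(
        model=model,
        args=TrainingArguments(output_dir="olmo-1b-sft", per_device_train_batch_size=2,
                               num_train_epochs=1, logging_steps=10),
        train_dataset=train_ds,
        eval_dataset=test_ds,
        formatting_func=create_prompt,
        max_seq_length=1024,
        tokenizer=tokenizer,
    )
    # trainer.train()   # as of the walkthrough this step errors with the HF port; the torchrun path below works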
After that, we can do whatever we like. We can change the mass paths, you know, if we want to. And we can use the, we can change, update, and modify our evaluators. Very kind of straightforward, you know, process. And then we can just run this big script. This is going to run this script via Torch Run. And it's just going to train the model, right? We're just going to do some fine tuning with that data we generated. Now, what you can look forward to in the future, though it's not currently compatible by default, is the Open Instruct, which is basically a training library that we could use for those models. It isn't currently, there's a few things that still need to be sorted out when it comes to the PEF QLORA implementation. But this is the same thing as before. Basically, we have a proprietary training data script, and then we can fine tune with the script using accelerate. Again, this is not something that's currently working for OMO 7B, for QLORA, but you can expect it to be finished soon. The model is very new. And for now, if you're looking to do some serious fine-tuning, then I would point you at the OMO fine-tuning section here to use this Torch Run script, which will help you to do that. So that is how we would fine-tune OMO on any variety of tasks and data. And with that, we'll push it back to Greg, who will take us to Q&A. Yeah, thanks, Chris. I mean, hey, sometimes when you're out here on the open source edge, things don't quite always work exactly the way you were hoping them to. And just to show you just how far out on the edge we are, you know, the sequence of work that AI2 has been putting in, this sort of initial paper on OpenInstruct and on Tulu, that was June 23. The sort of whim D, what's in my big data, that was October. Tulu 2 is November. Catwalk and Paloma both came out in December, these papers. Dolma came out in January and here we are in February with Olmo. I would venture to guess that AI2 is going to keep crushing it from here. So watch out for these guys for sure. And they bring together a ton of stuff. So trying to understand this suite, it's not something that's been around a while. If you understand it, if you can leverage it, this is going to be a competitive advantage for you out there in the marketplace, whether's olmo dolma wimd catwalk paloma tulu i mean all of these things are so new very few people know about them so take some time to learn about them and check out the papers in the youtube live chat we saw that the olmo models currently in 1 billion 7 billion we're expecting the 65 billion soon that'll probably be on 3 trillion tokens. That's the Dolma data set. And then the instruct tuning with OpenInstruct is not quite implemented all the way through Hugging Face so that we can leverage QLORA and some of the things we're used to. But I imagine it's going to be figured out real soon. So with that, we'd love to take any questions you guys have today. And we're happy to chat about OMO and some of the work that AI2 is doing more generally. Go ahead and let us know what you want to chat about, and we'll do our best to answer any and all questions that come through today. All right, Wiz. It looks like it didn't quite work the way we were hoping that it might today. I would expect they're going to get it working pretty soon, huh? What do you think? Yeah, I think so. I mean, you can get it to work with some kind of laborious code modifications that don't translate very well to the Colab environment. 
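To keep everything in Python, here is an illustrative sketch of that config-then-torchrun flow: copy the official config, override the handful of fields named in the walkthrough, and launch the repo's training script. The file paths, the script name, and any config keys beyond load_path, reset_trainer_state, and the data paths are assumptions.

    import subprocess
    import yaml   # pyyaml

    with open("configs/official/OLMo-1B.yaml") as f:   # assumed location of the official config
        cfg = yaml.safe_load(f)

    cfg["load_path"] = "checkpoints/olmo-1b-final"         # assumed path to the released final checkpoint
    cfg["reset_trainer_state"] = True                      # don't resume the original pre-training run
    cfg["data"]["paths"] = ["data/tulu_preprocessed.npy"]  # assumed output of the Tulu preprocessing script

    with open("configs/olmo-1b-tulu-sft.yaml", "w") as f:
        yaml.safe_dump(cfg, f)

    # One process per GPU; the script name is illustrative — use the training entry point from the OLMo repo.
    subprocess.run(["torchrun", "--nproc_per_node=8", "scripts/train.py",
                    "configs/olmo-1b-tulu-sft.yaml"], check=True)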
But other than that, I think they'll be done pretty quick. I know that they're working on the Hugging Face integration. You can check the model community pages for other people kind of highlighting these problems. But for right now, I think we can expect that just using their script is going to be good. The script doesn't super, you know, it's not going to play super well with the Colab environment. It is kind of something that you want dedicated resources for so you can do those long trainings. But it will, it would technically work. It might crash and it's going to consume a lot of compute credits, but that's all right. Yeah. Yeah. All right. Yeah. Yeah. All right. Well, we've got some questions from folks that are kind of interested in different aspects of how they train this thing. The first question is about the web crawl. I assume this has something to do with common crawl. How does that actually work? Whether we're crawling the entire web or we're crawling GitHub or crawling Reddit, like, how do you think about that? Yeah, when I think about, like, basically this is a scraping, right? So we're scraping the just text off the internet, right? And we do this with a crawler that just follows links, gets text, goes to the next one, gets text, goes to the next one, gets text. We just sometimes refer to it as like spiders. The idea is that, you know, it is just, you know, as much as possible indiscriminately obtaining text from the internet. Obviously, there are subsets or websites that specifically we want to target, but or websites that we definitely don't want to target, but for the most part, it's just let it loose and consume. That's a much harder practice to do now than it was when these data sets were kind of originally collected because of the kind of changes in netiquette as it was, or as it were. But yeah, the idea is it's just a script that follows links and gets text. that follows links and gets text. Simple enough, yeah. But it just does it for like all of the text it can find. It sort of spider webs its way out to, what was it? Yeah, petabytes of text, right? Like wild. I would not be allowed today. I would encourage everybody to check out the Common Craw website too. It's, uh, it's really cool that they exist and it's sort of interesting that they formed the app, uh, you know, kind of base of all LLMs today. Okay. Ali asks, I saw something related to, they provided a hundred checkpoints. Uh, what are the benefits, uh, to giving such a large number of checkpoints? Why would we care about this? So there are two ways that you can think about this. So I'm going to respond in the way that I think about it. And then I'm going to highlight that there is another way to think about it. For me, it's mostly an indication of openness. The idea is that they trained this model for a long time. They did use checkpointing. And so they're releasing those checkpoints so that we can see how the model is trained, how it progresses through the checkpoints, what potentially they did at each checkpoint. It's very much part of that open idea, right? So I don't think that there's a tremendous amount of utility past that. However, having the earlier checkpoints might give you some kind of launchpad, right, that you could use to then pivot to continue pre-trained on another data set or another corpus. So it's not like there's no utility at all. 
Probably not for most users, but definitely for some people, it's going to be awesome to have those checkpoints and to kind of, you know, start with a more established base that's not yet locked all the way in, right? Like if we want to say, create like a law domain model, we might use one of the earlier checkpoints as a starting point and then, you know train it on our on our map of uh domain specific text so you can sort of download the parameters in the checkpoint and start from wherever you'd like yeah yeah okay that's that's super cool it's like it's like starting with a you know a slate that's been wiped but not yet cleaned yeah yeah totally clean slate or a totally locked in slate so yeah yeah and you can kind of look at the training loss curves and you can say uh yeah yeah okay that's super they actually released the entire training blocks so we can see like their weights and biases runs that that actually show exactly what the model's loss was at, you know, X iteration. And we can see, we can see, we, we have tremendous amount of insight into, I mean, so to me, this paper, all the things they've released really, I, the way I, I think about it is it's a, it's like a cookbook or an instruction manual on how to pre-train a model. And it's the most comprehensive one that we've seen yet. It truly is incredible. Yeah, yeah, yeah, yeah. And I remember when we were looking, when we were watching the Bloomberg GPT guy present at ML Ops World last year, he was sort of talking about the training loss is going down, going down, and then it spikes up. And then you're like, ah. training loss is going down, going down, and then it spikes up and then you're like, ah, and you know, and I think like, you know, even being able to look and see where those maybe those spikes or those, you know, going back up things occurred is very, very interesting. So follow on question here, did they also release metrics related to each of the checkpoints? So we can pick it like we already talked about the loss. But what other what other sort of numbers are you thinking about when you're doing unsupervised pre-training besides the loss curve? They released everything. We have it all. We have the evals at every checkpoint. We have, I mean, it is, we have everything they had, which is insane, right? Like you're not supposed to do that, but they did it. So that's very nice of them. Yeah, yeah, yeah. It's very much what you just said, right? Yeah. Bloomberg GPT released a very good paper. It's still a good paper. Less transparent, obviously, because, of course, it's using sensitive financial information, blah, blah, blah. But it is interesting to see, like, okay, here's a mistake or here's here's like a stumbling block that they ran into and here are the exact conditions that led to it and we have so much you know super focused granular feedback from from the uh the Ulmo crew that lets us see exactly where those things happen like Greg said where these spikes are occurring you know did we do a a shuffle or some kind of you know change that caused our our metrics to actually decrease do we need to reset like there's all kinds of different information that we can we can get because we have everything they had to train the model and i think it's also a good indicator of what you should have to train a model right like these are the things you should be paying attention to. And so if someone were to ask me, what should I pay attention to during free training? I would say almost everything they looked at. All the stuff. 
Right. Yeah. Yeah. Well, you know, we keep saying they, and I just want to reiterate AI by AI asks, who's the Allen and Allen Institute? Who's the benefactor? And it is Paul Allen, the co-founder of Microsoft, the late Paul Allen. He did sort of put the seed money down for this. And he also, I believe, has a building at the University of Washington named after him, and they sort of collaborate. It's in Seattle. So they sort of collabo pretty hard on making sure that people are pushing this forward. And they seem to be attracting some talent too, by the way. If anybody's looking for like a pretty sweet gig, that might be one. I know that I was on the website earlier. They are hiring. So super cool stuff there. What's the difference between the notebook you demoed and the Torch run script, Chris? When it comes down to it, it's just using the Huggy Face Transformers ecosystem. It's not just directly training with Torch. So PyTorch will just train it using pie torch using their script um it is different in the sense that uh if you so if you look into the script itself uh which is in the repo that uh that we we saw that's the omo repo um it is not using the same methods that we would to train using uh just the Transformers library as what it comes down to okay cool yeah uh Jason just check out the Olmo paper the Olmo paper I think is one of the first links we dropped then I want to go to Luna's question here Luna's talking about a smart rag model how to build a smart rag model for unstructured data structuring. Is that related to OMO in any way? How should we be thinking about that? Not related to OMO, but still cool. You know, if you're looking to do this, basically, there's a number of steps of smart rag. Hopefully, if you haven't, you should check out our video that we did with RCAI talking about DOM and smart RAG. But when it comes to unstructured data structuring, that is a pretty complex task. You'll see that, you know, it's kind of the big question right now, right? This is relating to like, how do we interact with complex PDFs? Or how do we take like, you know, web page and reconstruct that into some kind of meaningful structured data? Obviously, with HTML, it's a little bit easier than without. But yeah, yeah, that's yeah, yeah. Yeah, definitely. yeah yeah that's yeah yeah yeah yeah definitely um okay so uh juan the juan asks any new slash specific nuggets that you're getting from their repo from their paper about the pre-training process anything that uh that we sort of are drawing from that that's maybe unique i i don't know about like uh specifically unique you know but i did like that they had uh non-parametric layer norm because it is it is like a layer norm is a problem right like hopefully we can all agree especially when it comes to like, you know, quantization and representing very small values, layer norm is kind of a pain. So I need to look a lot deeper into specific reasoning into why they chose non-parametric in order to really understand it better. But I did think that was cool. Otherwise, it's, I mean, listen, it's Lama 2 with a fresh coat of open source paint. You know, it's still awesome. It's still dope. They still did everything that probably the meta team did for Lama 2 and the Mistral team did for Mistral 7B and everything. But at the end of the day, it's 32 by 32. Transformer architecture, Swiglu activation. 
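As a quick illustration of the "non-parametric layer norm" point above, here is a minimal PyTorch sketch. It is only meant to show the difference in learnable parameters; the dimension and tensor shapes are arbitrary.

    import torch
    import torch.nn as nn

    d_model = 4096
    x = torch.randn(2, 8, d_model)

    # Standard ("parametric") layer norm: learnable gain and bias per feature.
    parametric_ln = nn.LayerNorm(d_model)

    # Non-parametric layer norm: normalize only, no learnable affine parameters,
    # so there is nothing extra to train or to quantize.
    nonparametric_ln = nn.LayerNorm(d_model, elementwise_affine=False)

    print(sum(p.numel() for p in parametric_ln.parameters()))     # 2 * d_model
    print(sum(p.numel() for p in nonparametric_ln.parameters()))  # 0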
I mean, you know the the novelty is okay one thing actually they released all their data cleaning scripts right so that is something that now you can just have and use uh data data cleaning ddupe and everything that comes with data cleaning is huge and it's a lot of effort and now it's slightly less effort and so um if you if you're like looking from a piece of code to take from the paper and you're interested in pre-training models uh which you if you're interested in it you'd be doing it it's very expensive and time consuming uh their their data cleaning script i think is the most valuable lines of code that come from this paper. Yeah. Yeah, yeah, yeah. Yeah, it's so cool. They've got the data. They've got the data cleaning script. They've got all the training loss curves. They've got all the checkpoints. Yeah, I mean, it's super cool. It's sort of a new year and a new day for AI engineering and for building LLM. So, Wiz, that's it for today on the Q&A. Thank you so much, man. And we'll go ahead and wrap up, everybody. Thanks for joining us. If you haven't liked and subscribed yet, please do. And join us next week to talk alignment with RLAIF. That's reinforcement learning with AI feedback. That's going to be the second part of our alignment series. We will be covering direct preference optimization in a few weeks out from that as well. And if you haven't joined our Discord community, we'd love to see you in there soon. And if you're interested in going deeper on any of the steps that it takes to build actual production LLM applications. Maybe you should check out our AI engineering boot camp. We've got a cohort starting next week and we'll be running cohorts throughout the year and updating with the latest and greatest content at the open source edge. Finally, please give us feedback on today. This was a special event that we saw the model come out. We put it together. We want to continue to do more special events. If you have great ideas or feedback on how we could improve, let us know. And as always, keep building, shipping, and sharing. And we'll see you back next week while we'll be doing the same in the meantime. Thanks everybody. See you soon.
AI2's OLMo (Open Language Model): Overview and Fine-Tuning
3,623
AI Makerspace
20240209
GPT-4 Summary: Unlock the secrets of OLMo, the first "truly open" Large Language Model (LLM) launched by The Allen Institute for AI (AI2) on February 1, 2024! Join us in an exclusive YouTube event where we explore the groundbreaking OLMo series, including its unique Dolma pretraining dataset and the sophisticated architectures of OLMo-1B, OLMo-7B, and OLMo-7B-Twin-2T models. Dive into a live demonstration of OLMo-7B's out-of-the-box performance using the Paloma benchmark, and witness a hands-on fine-tuning session utilizing the Quantized Low-Rank Adaptation (QLoRA) method with data from AI2’s open-instruct GitHub repository. This event is a must-watch for learners eager to grasp the essence of true openness in AI, AI engineers looking to leverage the latest in LLM technology, and practitioners keen on the nitty-gritty of the AI2 OLMo series. Don't miss this chance to be at the forefront of open-source AI innovation! Event page: https://lu.ma/olmoLM Have a question for a speaker? Drop them here: https://app.sli.do/event/nnX68PpbCpNogbB8DWJPuH Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/urhWQxtSmzmLsgWaA
2024-06-10T10:02:13.007039
https://www.youtube.com/watch?v=j2OAeeujQ9M
Hey, this is Lance from LangChain. We've seen very high interest in building LLM agents using open source LLMs, and so we wanted to talk through how to do that from scratch using Llama 3. So first, what is an agent? Lilian Weng has a very nice blog post that laid out the central components of agents as being planning, memory, and tool use. So I want to walk through these components individually and how I can use them with Llama 3. First, let's talk about tool use. I'm going to copy over some code and we're going to walk through it. So I have this notebook, done a few pip installs, set a few API keys. We'll use Groq as our LLM. We'll use Tavily for web search as one of our tools. And we'll use LangSmith for tracing. But that's all I've done here. Okay. And I'm going to kind of have this image side by side so we can look at it. So first, tool use. What's the big idea here? The big idea is simply this. I want to take an LLM, give it awareness of some external tool that exists, and have the LLM return the payload necessary to invoke that tool. That's really all that's going on. Now, this is often kind of confused, so I wanted to zoom in and explain this exactly. Let's say I have a function called magic function which takes an input and adds two to it. I want to give an LLM the ability to recognize whether or not to invoke this function and to return the payload necessary to run the function given the user input. So here's exactly what I want to have happen. I want to take that function, somehow bind it to my LLM, and given an input, then return both the function name itself and the arguments necessary to run the function. Remember, LLMs are just string to string, right? An LLM doesn't have the magic ability to call that function natively. But what it can do is return: okay, I've seen this function, I know it exists, and I'm going to give you exactly the input format necessary, the payload to run the function, as well as the name of the function. So that's really all that's going on. So first, this tool decorator in LangChain allows you to take any arbitrary function and just turn it into a tool. And let's just kick this off. So here's my magic function, and here's a web search function. These are two things that I want to turn into tools, and I can do that right here. So we can run this. Now if I look at magic function, it's now a structured tool. It has a name. It has a description. And it also has that input, or arguments, captured as a Pydantic schema. Okay. So all this information can be passed directly to our LLM. That's the key point. This allows us to go from arbitrary functions to tools that can be bound to an LLM. Okay, so that's kind of step one. Now, step two, this is where things get interesting. I'm going to use Groq here, and I'm going to use a prompt. I'm basically going to say: you're a helpful assistant with two tools, web search and a custom function. Use web search for current events; use the magic function if the user directly asks for it; otherwise just answer directly. So that's my prompt. And let's test this in two cases to explain exactly how this works. All I'm doing is using ChatGroq, setting Llama 3 70B, and creating this runnable. This is kind of a LangChain primitive for basically invoking the LLM. So that's all I've done.
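As a concrete sketch of the tool definitions described above: the `@tool` decorator turns plain functions into structured tools with a name, description, and argument schema. The magic-function body matches the description in the video; the web-search body is a stand-in assumption using the Tavily integration.

    # Sketch of turning plain functions into LangChain tools.
    from langchain_core.tools import tool
    from langchain_community.tools.tavily_search import TavilySearchResults

    @tool
    def magic_function(input: int) -> int:
        """Applies a magic function to an input."""
        return input + 2

    @tool
    def web_search(query: str) -> str:
        """Searches the web for current events."""
        return str(TavilySearchResults(max_results=1).invoke(query))

    print(magic_function.name)         # "magic_function"
    print(magic_function.description)  # the docstring becomes the tool description
    print(magic_function.args)         # Pydantic-derived argument schema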
Now, here's what's interesting. This is piping the prompt to an LLM, and I've bound my tools to the LLM. So this is automatically taking those tools we defined, and it's basically giving them to the LLM so that it's aware of them. So that's basically represented in this red box here. You take external tools and you basically bind them to the LM so the LM is aware that they exist. That's kind of step one. Now here's step two. I can basically take a question. So I'm going to ask what is magic function three. I'm going to invoke my runnable or my chain, right, with this. And let's see what happens. I'm going to run this.nable or my chain with this. And let's see what happens. I'm going to run this. Now, here's what's interesting. That payload contains an object tool calls, which contains the name of the function and the arguments. That's it. So that's the key thing. And I can look at the raw payload as well. So the raw payload is just simply this AI message. It contains a bunch of information. But here's the main thing. It contains basically the name of the function to call and the arguments to pass to the function. So again, that's exactly represented here. All that's happening is I've taken a function, I've turned it into a tool, I've bound it to my LLM, I can ask a question in natural language, and the LLM can respond directly with the function to call or the tool to use and the input argument to use based upon the user input. That's the key point. And that's really all that's happening with function calling. That's all I need to know. Okay. So here's the other key thing. What if I just ask a question about the United States based on my prompt, it should not try to invoke any of these tools. Now let's test that. I run this. Good. And so this payload tool calls is empty. I can look at the raw payload. And yeah, now it's just a chat response, right? The capital of the US is Washington, DC. Great. Okay. So that's it. So hopefully now you understand how tool use works. And now remember, this requires an LLM that's actually been fine-tuned or prompted or otherwise is compatible with tool use. And this is a very important point. We talked to the folks at Kroc. They have kind of a proprietary implementation for how they do this, which we don't know fully. But it is reported that works very well with LLAMA-70B, LLAMA-370B. And in my experience, I've seen it to indeed work quite well. So in any case, the key point is this. I can take any arbitrary functions I want, I can turn them into tools, I can then pass those tools to an LLM, I can bind them, and then you can see right here, when I invoke my LLM with a question, the LLM makes the decision to use one of the tools, and if it does, it's going to return to you the name of the tool it wants to use and the input argument. That's the key point. Okay. So that is really what you need to know about tool use. Now we get to the fun stuff. We're gonna build the agent. And for this, I'm gonna use LandGraph. And I'm gonna explain kind of how this works over time. 
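Before moving on to the agent, here is a sketch of the tool-binding step just described: build the prompt, bind the tools to the LLM, and inspect the `tool_calls` payload on the returned message. The prompt wording is paraphrased and the Groq model id is an assumption based on the video.

    # Sketch of binding tools to the LLM and inspecting the tool-call payload.
    from langchain_groq import ChatGroq
    from langchain_core.prompts import ChatPromptTemplate

    llm = ChatGroq(model="llama3-70b-8192", temperature=0)

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant with two tools: web search and a "
                   "custom magic_function. Use web search for current events and "
                   "magic_function only if the user asks for it; otherwise answer directly."),
        ("human", "{input}"),
    ])

    assistant_runnable = prompt | llm.bind_tools([magic_function, web_search])

    msg = assistant_runnable.invoke({"input": "What is magic_function(3)?"})
    print(msg.tool_calls)
    # e.g. [{"name": "magic_function", "args": {"input": 3}, "id": "..."}]

    msg = assistant_runnable.invoke({"input": "What is the capital of the US?"})
    print(msg.tool_calls)  # [] -> the model answered directly, no tool call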
But first, the way to think about LandGraph is basically it's a way to lay out flows and flows in particular land graph are often characterized by cycles the ability to kind of do feedback and that's really relevant for agents and we'll explain why here shortly so land graph basically takes a state which can live over the course of your graph or flow and it can be accessed by all kind of what we're gonna call nodes in your graph okay so first, so first as state, I'm just going to find a set of messages. And don't worry too much about this for now. This will all make sense in about a minute. Okay, now here's where things are going to get interesting. I'm going to define an agent that contains two nodes. Okay, so first we're going to take our input. Again, it's a human message. We pass that to our LLM, which has the bound tools. The LLM is going to make a decision to use a tool or not, and we just walk through this. That's a step one. That's this thing we've already seen, right? Now what we're going to do in LandGraph is we're going to add basically what we're going to call a conditional edge. So this edge is going to, all it's going to do is say, was there a tool call or not? If there was a tool call, I'm going to route that over to a separate node that basically runs the tool. So let's walk through with our example we just did. What is magic function of three? The LLM made the decision to invoke the magic function, and it gave us the payload, right the payload right we just saw that so that's arguments input is three name is magic function those get plumbed over to what we're going to call tool node which actually invokes the necessary tool so it's going to basically take in this name magic function it's going to look up magic function itself and it's basically just going to run that function with this input payload, and then it's going to return that as a tool message to the LLM. That's all it's going to go on. LLM is going to see that tool message. It's going to make a decision about what to do next, and eventually this is going to keep running until there's a natural language response. And in this kind of toy example, the tool message would return with the result of five. That would be returned to the LLM. The LLM would see that and say, example, the tool message would return with the result of 5. That would be returned to the LM. The LM would see that and say, okay, the result is 5, and then you would exit. So that's like the toy example we want to see. Now we can implement this all in line graph really easily. And let's actually just talk through that quickly. I've copied over the code here. So all we've defined here is we have this assistant. So this is basically just wrapping the chain that we defined up here, this assistant runnable. We just wrap that. And basically all we're doing here is we're adding a retry. So basically if a tool is called, then we're good. That's valid. If it has meaningful text, we're good. But otherwise we do reprompt it. That's all we're doing here, right? We're just making sure that the LLM actually returned a valid response. So that's really all to worry about here and there. We're also creating this tool node. So this tool node basically just will try to invoke the tool, and it'll basically have a little, we're going to add a little thing to handle errors in the feedback. These are just like utility functions, so don't really worry too much about them. Now here's kind of the interesting bit. 
We're just going to build build the graph and it's going to look exactly like we show here so what we're going to do is we're going to add a node for our assistant right we're going to add a node for our tool node and that's kind of this piece and this piece that's our tool node and then we're going to add this conditional edge which is a tools condition, which is all it's going to be is this piece. It's basically going to take the result from the LM, is a tool called, if yes, go to the tool node, if no, end. And we can implement that right here. So this tools condition, that's all it's going to do. It's basically going to return either a tools invoked or end. And then we go from tools back to the assistant. Now let's run all this and we can see what's nice about LandGraph is we actually, it'll automatically lay this out as a graph for us. We can visualize it here. So what's going to happen is we're going to start, we're going to invoke our assistant. Our assistant will, in some cases, ask to use a tool. It'll then go to the tool node, the tool will be invoked,oked that'll return to the assistant and that will continue until there's natural language response and then we'll end that's it nice and easy so let's actually test this out um and i'm gonna go ahead let's ask a super simple question so let's look at what we're i've kind of two questions was magic function three and was the weather and sf let's ask question the first question What is magic function three and what is the weather NSF? Let's ask the first question. What's magic function three? Boom. So we're going to run this now. Now I'd like to go over to Langsmith and look at the result here. So let's actually just walk through this. This basically allows us to say we basically started. We went to our assistant. And these are the functions available to our assistant. So that's kind of, you know, we gave it magic function. We gave it web search. You know, here's the prompt. What's magic function three? And what we get as an output is, again, the function to use and the payload to pass to the function. So again, remember, this is kind of always a little bit of a confusing thing. An LLM can't magically call functions. An LLM is type string to string. It can return strings and it ingest strings. So that's fine. All it's going to return in this particular case is just the payload to run the function as well as the function name. But that's it. That's all the LLM is responsible for. Then what we need to do is we have this tools node, see that's here, that will then invoke our function. And so you can see the input is just the argument. The output is 3 plus 2, 5. Great. Now this goes back to our LLM. And then our LLM just simply sees, OK, it sees this tool message that the function was called. Here's the output of 5. And it returns natural language. The result of magic function is 5. And then we end. That's it. Nice and simple. And we can see that also kind of laid out here. Here's our human message. This is the AI message. So basically the AI makes the decision to invoke the tool and it gives you the input payload. Then here's the output tool message saying I ran the tool. Here's the output that LLM gets that back and basically gives you natural language. And then based upon our condition here, this tool's condition, if it's natural language, And then based upon our condition here, this tools condition, if it's natural language, it ends. If it's a tool invocation, it goes back to the tool node, right? So that goes to here. 
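Here is a minimal sketch of the graph wiring just walked through: an assistant node, a tool node, and a conditional edge that routes to the tools when the last AI message contains tool calls. The retry/error-handling wrappers from the video are omitted, and import paths may vary slightly across LangGraph versions.

    # Sketch of the agent graph in LangGraph (assumes `llm`, `magic_function`,
    # and `web_search` from the earlier sketches).
    from langchain_core.messages import SystemMessage
    from langgraph.graph import StateGraph, MessagesState, START
    from langgraph.prebuilt import ToolNode, tools_condition

    tools = [magic_function, web_search]
    llm_with_tools = llm.bind_tools(tools)

    system = SystemMessage(content="You are a helpful assistant with a web search "
                                   "tool and a magic_function tool.")

    def assistant(state: MessagesState):
        # One LLM step over the running message history; may emit tool calls.
        return {"messages": [llm_with_tools.invoke([system] + state["messages"])]}

    builder = StateGraph(MessagesState)
    builder.add_node("assistant", assistant)
    builder.add_node("tools", ToolNode(tools))
    builder.add_edge(START, "assistant")
    # If the last AI message has tool calls, go run them; otherwise end.
    builder.add_conditional_edges("assistant", tools_condition)
    builder.add_edge("tools", "assistant")
    graph = builder.compile()

    result = graph.invoke({"messages": [("user", "What is magic_function(3)?")]})
    print(result["messages"][-1].content)  # e.g. "The result of magic_function(3) is 5."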
So in this regular case, it went back to the assistant and now it's a natural language response, which means we just end. That's it. So that's kind of a nice and simple example. Now, if we go, let's try something slightly more complicated. Let's try our other tools. Let's try, what's the weather in SF right now? We're going to try to run that. Cool, we can actually see that it's going to call our web search endpoint. That's great. It gets this kind of raw tool message back from the endpoint, and then the AI will synthesize that into, you know, the weather is 60 degrees right now with mist. Okay. So that's really it. This explains how you can lay out arbitrary agents with LLAMA3, open source LLM. We use chat grok to do that. Grok has been adapted for tool use, and that's the kind of main important thing you need to recognize that you need an LLM that actually has tool use enabled via prompting or fine-tuning or otherwise. And what you can see is, if we kind of go back to the diagram, what we've done here is we're using LineGraph to kind of orchestrate this process, and what's going to happen is you take a question in, our LLM makes the decision based on the question to invoke a tool, and then this conditional edge will determine, hey, if a tool is kind of invoked, then go to the tool node and actually execute the tool. The tool is executed. You get a tool message back with the tool output. Send that back to the LLM. LLM reasons again. And it could make a decision to call another tool, but in our particular case, in both cases, the tool message output was returned to the LLM. The LLM then responds in natural language. Here is the solution, and because of that, we end, and that's it. That's kind of how to build an agent from scratch using an open source LLM, LLM3, with a land graph to orchestrate it hopefully um from kind of kind of very simple components and first principles and again the key thing here really is the ability or the ability for an llm to reliably invoke tools so we talked through the case of adding two tools magic function and web search to our agent now let's say we want to make this a little bit more complicated and try some additional tools so replicate Replicate as a service allows you to access many different models, which is really convenient. And I'm going to go ahead and use it to augment Lama3 with a few multimodal capabilities. So all I've done is I've set my Replicate API key. So I've actually already done that. I've import Replicate. And I'm going to use a few different things here. So I'm going to do a text to cert, text to image tool, which is going to call this particular model, which is basically an OpenDolly model, which will go from text to image. I'm going to create, again, another tool, image to text, in this case, take an image in. It'll use a version of Lava to then produce text from the image, and text to speech. This is another option. So really all you need to do here is very simply just again, use this tool decorator with a function definition that invokes the model of choice. So now the question is how do we add these as tools to our agent? So again, it's kind of like before all we need to do is just update our tools list to include some of our new functions here. That's it. Pretty simple. Now that tools list is already bound to our agent here. So let's just go ahead and kind of rerun everything just to make sure this all works. 
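As a rough sketch of the Replicate-backed multimodal tools described here: each one is just another decorated function that calls a hosted model. The model slugs, input keys, and output handling below are illustrative assumptions; substitute whichever Replicate models you actually want to use.

    # Sketch of multimodal tools backed by Replicate (model slugs are examples, not
    # the exact ones used in the video).
    import replicate
    from langchain_core.tools import tool

    @tool
    def text_to_image(text: str) -> str:
        """Generate an image from a text prompt and return its URL."""
        output = replicate.run("stability-ai/sdxl", input={"prompt": text})
        return str(list(output)[0])  # typically a list of image URLs

    @tool
    def image_to_text(image_url: str) -> str:
        """Describe an image given its URL."""
        output = replicate.run(
            "yorickvp/llava-13b",
            input={"image": image_url, "prompt": "Describe this image."},
        )
        return "".join(output)  # streamed text chunks joined into one string

    # Extending the agent is just updating the tool list and rebuilding the graph.
    tools = [magic_function, web_search, text_to_image, image_to_text]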
And all I'm going to do here is just update my question list to include a few new questions that are related to my new tools and let's go ahead and try one so let's say i want to try um my index two questions so questions two and this is gonna be my question related to um image to uh this is gonna be text to image so let's basically say i'll kick this off and i'll go back and show you um so this is going to basically uh, hopefully invoke the text image tool based on this prompt, a yellow puppy running free with wild flowers in the mountains behind. So that's our prompt. We're going to pass it to our text image tool. And it looks like that has been called correctly, so that's great. Now we can also go over to Langsmith. I can check my projects here. Cool. Here's my agent. Here it is running. So we can also look at the trace to confirm that everything's working. So cool. So it looks like it is calling text image tool, so that's fantastic. That's running right now. Great. Our tool ran. Now we can check our image here. Look at that. Very nice. Again, this is just showing you the ability to create agents that have many different types of tools. Again, previously we only had covered two very simple tools, a magic function, web search, but we can actually do pretty interesting things. This actually shows how you can take replicate for example and basically invoke many different llms hosted by replicate or you know not just llms but different types of models so this is a text image model image of text and so forth text of speech basically to augment llama 3 and give it multimodal capabilities so in any case it's a really nice kind of illustration of the fact that capabilities. So in any case, it's a really nice kind of illustration of the fact that agents are very general and tools can be composed of many different kinds of things. In this particular case, different models through Replicate, which we can attach to LLAMA3 to augment its capabilities. Thanks.
Building open source LLM agents with Llama 3
1,059
LangChain
20240607
Agents combine tool use, memory, and planning to build systems that are capable of short- or long-term autonomous tasks. Here, we show how to build agents from scratch, using Llama 3 with tool calling (via Groq) and LangGraph. Check out our Llama 3 recipes here! https://github.com/meta-llama/llama-recipes/tree/main/recipes/use_cases/agents/langchain
2024-06-10T10:07:34.716582
https://www.youtube.com/watch?v=wd7TZ4w1mSw
Hi, this is Lance from LangChain. We're starting a new series called RAG from Scratch that's going to walk through some of the basic principles for RAG and kind of build up to advanced topics. The motivation is that LLMs haven't seen all of the data that you may care about. So, like, private data or very recent data would not be included in the pre-training runs for these LLMs. And you can see here on the graph on the x-axis the number of tokens that they're pre-trained on, which is of course very large, but it's still always going to be limited relative to private data that you care about or, for example, recent data. But there's another interesting consideration, which is that LLMs have context windows that are actually getting increasingly large, going from like thousands of tokens to many thousands of tokens, which represents dozens of pages up to hundreds of pages, so we can fit information into them from external sources. And a way to think about this is that LLMs are kind of the kernel of a new kind of operating system, and connecting them to external data is a very central capability in the development of this new emergent operating system. So retrieval augmented generation, or RAG, is a very popular general paradigm for doing this, which typically involves three stages. The first stage is indexing some external documents so they can be easily retrieved based on an input query. So, for example, we ask a question, we retrieve documents that are relevant to that question, and we feed those documents into an LLM in the final generation stage to produce an answer that's grounded in those retrieved documents. Now, we're starting from scratch, but we're going to kind of build up to this broader view of RAG. You can see here, there's a lot of interesting methods and tricks that kind of fan out from those three basic components of indexing, retrieval, and generation. And future videos are actually going to walk through those in detail. We're going to try to keep each video pretty short, like five minutes, but we're going to spend a lot of time on some of those more advanced topics. First, over the next three videos, I'll just be laying out the very basic ideas behind indexing, retrieval, and generation, and then we'll build beyond that into those more advanced themes. And now I want to show just a quick code walkthrough, because we want to make these videos also a little bit interactive. So right here, and this repo will be shared, it's public, I have a notebook open, and I've just basically installed a few packages and set a few environment variables for my LangSmith keys, which I personally do recommend. It's really useful for tracing and observability, particularly when you're building RAG pipelines. So what I'm going to show here is the code for our RAG quick start, which is linked here. And I'm going to run this, but I'm then going to walk through everything that's going on. So actually, if we think back to our diagram, all we're doing here is loading documents. In this case, I'm loading a blog post. We're then splitting them, and we'll talk in future short videos about why splitting is important. But just for now, recognize we're splitting them with a chunk size of 1,000 characters. So we're splitting up our documents. Every split is embedded and indexed into this vector store. We picked OpenAI embeddings, and we're using Chroma as our vector store, which runs locally. And now we define this retriever.
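A minimal sketch of the indexing steps just described: load a blog post, split it into ~1,000-character chunks, embed with OpenAI embeddings, and index into a local Chroma vector store. The URL and chunk settings are assumptions based on the walkthrough.

    # Sketch of the quick-start indexing step.
    from langchain_community.document_loaders import WebBaseLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_openai import OpenAIEmbeddings
    from langchain_community.vectorstores import Chroma

    loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
    docs = loader.load()

    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    splits = splitter.split_documents(docs)

    vectorstore = Chroma.from_documents(splits, embedding=OpenAIEmbeddings())
    retriever = vectorstore.as_retriever()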
We then have defined a prompt for RAG. We defined our LLM. We've done some minor document processing. We set up this chain, which will basically take our input question, run our retriever to fetch relevant documents, put the retrieved documents and our question into our prompt, pass it to the LLM, format the output as a string, and we can see here's our output. Now we can open up LangSmith and we can actually see how this ran. So here's our question and here's our output, and we can actually look here's our retriever, here's our retrieved documents, so that's pretty nice. And ultimately here was the prompt that we actually passed into the LLM. You're an assistant for QA tasks. Use the following pieces of retrieved content to answer the question. Here's our question. And then here's all the content. This we retrieved. And that drills in our answer. So this just gives a very general overview of how RAG works. And in future short videos, we're going to like break down each of these pieces in a lot more detail. Thanks.
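And here is a sketch of the generation side of that quick start: fetch relevant chunks, stuff them plus the question into a prompt, call the LLM, and parse the output to a string. The prompt wording and model name are paraphrased assumptions, and `retriever` comes from the indexing sketch above.

    # Sketch of the RAG chain: retrieve -> prompt -> LLM -> string.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template(
        "You are an assistant for question-answering tasks. Use the following "
        "retrieved context to answer the question.\n\n"
        "Question: {question}\n\nContext: {context}"
    )

    def format_docs(docs):
        # Join the retrieved chunks into one context string.
        return "\n\n".join(d.page_content for d in docs)

    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
        | StrOutputParser()
    )

    print(rag_chain.invoke("What is task decomposition?"))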
RAG From Scratch: Part 1 (Overview)
313
LangChain
20240206
LLMs are a powerful new platform, but they are not always trained on data that is relevant for our tasks. This is where retrieval augmented generation (or RAG) comes in: RAG is a general methodology for connecting LLMs with external data sources such as private or recent data. It allows LLMs to use external data in generation of their output. This video series will build up an understanding of RAG from scratch, starting with the basics of indexing, retrieval, and generation. It will build up to more advanced techniques to address edge cases or challenges in RAG. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_1_to_4.ipynb Slides: https://docs.google.com/presentation/d/1C9IaAwHoWcc4RSTqo-pCoN3h0nCgqV2JEYZUJunv_9Q/edit?usp=sharing
2024-06-10T21:16:15.367167
https://www.youtube.com/watch?v=bjb_EMsTDKI
Hi, this is Lance from LangChain. This is the second video in our series RAG from Scratch, focused on indexing. So in the past video, you saw the main overall components of RAG pipelines: indexing, retrieval, and generation. And here we're going to deep dive on indexing and give just a quick overview of it. So the first aspect of indexing is we have some external documents that we actually want to load and put into what we're going to call a retriever, and the goal of this retriever is simply, given an input question, I want to fish out documents that are related to my question in some way. Now, the way to establish that relationship or relevance or similarity is typically done using some kind of numerical representation of documents. And the reason is that it's very easy to compare vectors of numbers relative to, you know, just free-form text. And so a lot of approaches have been developed over the years to take text documents and compress them down into a numerical representation that then can be very easily searched. Now, there's a few ways to do that. One family of methods is statistical: you take a document, you look at the frequency of words, and you build what they call sparse vectors, such that the vector locations are a large vocabulary of possible words and each value represents the number of occurrences of that particular word. It's sparse because there's, of course, many zeros; it's a very large vocabulary relative to what's present in the document. And there are very good search methods over this type of numerical representation. Now, a bit more recently, embedding methods that are machine learned, where you take a document and you build a compressed, fixed-length representation of that document, have been developed, with correspondingly very strong search methods over embeddings. So the intuition here is that we take documents and we typically split them, because embedding models actually have limited context windows, you know, on the order of maybe 512 tokens up to 8,000 tokens or beyond, but they're not infinitely large. So documents are split, and each split is compressed into a vector, and that vector captures the semantic meaning of the document itself. The vectors are indexed, questions can be embedded in exactly the same way, and then a numerical comparison in some form, using very different types of methods, can be performed on these vectors to fish out relevant documents relative to my question. And let's just do a quick code walkthrough on some of these points. So I have my notebook here. I've installed a few packages here, and I've set a few API keys for LangSmith, which are very useful for tracing, which we'll see shortly. Previously, I walked through this kind of quick start that just showed overall how to lay out these RAG pipelines. And here what I'll do is deep dive a little bit more on indexing, and I'm going to take a question and a document. First, I'm just going to compute the number of tokens in, for example, the question. And this is interesting because embedding models, and LLMs more generally, operate on tokens, and so it's kind of nice to understand how large the documents are that I'm trying to feed in. In this case, it's obviously a very small question. Now, I'm going to specify OpenAI embeddings. I specify an embedding model here, and I just call embed_query. I can pass my question, my document. And what you can see here is that runs, and this is mapped to now a vector of length 1536.
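A small sketch of the token counting and embedding calls referenced here. The question/document strings and model names are illustrative; the 1536-dimensional output matches OpenAI's text-embedding-ada-002 family.

    # Count tokens, then embed a question and a document into fixed-length vectors.
    import tiktoken
    from langchain_openai import OpenAIEmbeddings

    question = "What kinds of pets do I like?"
    document = "My favorite pet is a cat."

    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    print(len(enc.encode(question)))  # number of tokens in the question

    embd = OpenAIEmbeddings()
    query_vec = embd.embed_query(question)
    doc_vec = embd.embed_query(document)
    print(len(query_vec))  # 1536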
And that fixed length vector representation will be computed for both documents, and really for any documents. So you're always kind of computing this fixed length vector that encodes the semantics of the text that you've passed. Now I can do things like cosine similarity to compare them. And as we'll see here, I can load some documents. This is just like we saw previously. I can split them, and I can index them here, just like we did before. But we can see under the hood, really what we're doing is we're taking each split, we're embedding it using OpenAI embeddings into this vector representation, and that's stored with a link to the raw document itself in our vector store. And next, we'll see how to actually do retrieval using this vector store.
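And the cosine-similarity comparison mentioned above is just a dot product over normalized vectors; a plain NumPy sketch, reusing the two embeddings from the previous snippet:

    import numpy as np

    def cosine_similarity(a, b):
        # Closer to 1.0 means the two texts are more semantically similar.
        a, b = np.array(a), np.array(b)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine_similarity(query_vec, doc_vec))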
RAG From Scratch: Part 2 (Indexing)
292
LangChain
20240206
This is the second video in our series on RAG. The aim of this series is to build up an understanding of RAG from scratch, starting with the basics of indexing, retrieval, and generation. This video focuses on indexing, covering the process of document loading, splitting, and embedding. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_1_to_4.ipynb Slides: https://docs.google.com/presentation/d/1MhsCqZs7wTX6P19TFnA9qRSlxH3u-1-0gWkhBiDG9lQ/edit?usp=sharing
2024-06-10T21:17:35.168263
https://www.youtube.com/watch?v=LxNVgdIz9sU
Hi, this is Lance from LangChain, and this is the third video in our series RAG from Scratch, building up a lot of the motivations for RAG from the very basic components. So we're going to be talking about retrieval today. In the last two short videos, I outlined indexing and gave kind of an overview of this flow, which starts with indexing our documents, retrieval of documents relevant to our question, and then generation of answers based on the retrieved documents. And so we saw that the indexing process basically makes documents easy to retrieve. And it goes through a flow that basically looks like this: you take our documents, you split them in some way into these smaller chunks that can be easily embedded. Those embeddings are then numerical representations of those documents that are easily searchable, and they're stored in an index. When given a question that's also embedded, the index performs a similarity search and returns splits that are relevant to the question. Now, if we dig a little bit more under the hood, we can think about it like this. If we take a document and embed it, let's imagine that embedding just had three dimensions. So each document is projected into some point in this 3D space. Now, the point is that the location in space is determined by the semantic meaning or content in that document. So, to follow that then, documents in similar locations in space contain similar semantic information. And this very simple idea is really the cornerstone for a lot of search and retrieval methods that you'll see with modern vector stores. So, in particular, we take our documents, we embed them into, in this case, a toy 3D space; we take our question and do the same. We can then do a search, like a local neighborhood search you can think about, in this 3D space around our question to say, hey, what documents are nearby? And these nearby neighbors are then retrieved because they have similar semantics relative to our question. And that's really what's going on here. So again, we took our documents, we split them, we embed them, and now they exist in this high dimensional space. We've taken our question, embedded it, projected it in that same space, and we just do a search around the question for nearby documents and grab ones that are close. And we can pick some number. We can say we want one or two or three or N documents close to my question in this embedding space. And there's a lot of really interesting methods that implement this very effectively. I link one here. And we have a lot of really nice integrations to play with this general idea. So many different embedding models, many different indexes, lots of document loaders, and lots of splitters that can be recombined to test different ways of doing this kind of indexing and retrieval. So now I'll show a bit of a code walkthrough. So here, we kind of had walked through this previously. This is our notebook. We've installed a few packages. We set a few environment variables using LangSmith. And we showed this previously. This is just an overview showing how to run RAG kind of end to end. In the last short talk, we went through indexing. And what I'm going to do very simply is I'm just going to reload our documents. So now I have our documents. I'm going to re-split them.
And we saw before how we can build our index. Now here, let's actually do the same thing, but in the slides we actually showed that notion of search in that 3D space. And a nice parameter to think about in building your retriever is k. So k tells you the number of nearby neighbors to fetch when you do that retrieval process. We talked about, you know, in that 3D space, do I want one nearby neighbor or two or three? So here we can specify k equals one, for example. Now we're building our index, so we're taking every split, embedding it, storing it. Now what's nice is I ask a question, what is task decomposition? This is related to the blog post. And I'm going to run get relevant documents. So I run that. And now how many documents do I get back? I get one, as expected based upon k equals one. So this retrieved document should be related to my question. Now I can go to LangSmith, and we can open it up and we can look at our retriever. And we can see here is our question, and here's the one document we got back. And okay, so that makes sense: this document pertains to task decomposition in particular, and it lays out a number of different approaches that can be used to do that. This all makes sense, and it shows in practice how you can implement this kind of KNN, or k-nearest neighbor, search really easily, just using a few lines of code. And next, we're going to talk about generation. Thanks.
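A minimal sketch of that k-nearest-neighbor retrieval step, assuming a Chroma `vectorstore` built from the blog-post splits as in the earlier indexing sketch:

    # Fetch only the single closest chunk (k=1) for the question.
    retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
    docs = retriever.get_relevant_documents("What is task decomposition?")

    print(len(docs))             # 1, as expected with k=1
    print(docs[0].page_content)  # should pertain to task decomposition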
RAG From Scratch: Part 3 (Retrieval)
314
LangChain
20240206
This is the third video in our series on RAG. The aim of this series is to build up an understanding of RAG from scratch, starting with the basics of indexing, retrieval, and generation. This video focuses on retrieval, covering the process of document search using an index. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_1_to_4.ipynb Slides: https://docs.google.com/presentation/d/124I8jlBRCbb0LAUhdmDwbn4nREqxSxZU1RF_eTGXUGc/edit?usp=sharing
2024-06-10T21:18:24.893026
https://www.youtube.com/watch?v=JChPi0CRnDY
Hi, this is Lance from LangChain. Over the next few videos, we're going to be talking about query translation, and in this first video, we're going to cover the topic of multi-query. So query translation sits kind of at the first stage of an advanced RAG pipeline, and the goal of query translation is really to take an input user question and to translate it in some way in order to improve retrieval. So the problem statement is pretty intuitive. User queries can be ambiguous, and because we're typically doing some kind of semantic similarity search between the query and our documents, if the query is poorly written or ill-posed, we won't retrieve the proper documents from our index. So there's a few approaches to attack this problem, and you can kind of group them in a few different ways. So here's one way I like to think about it. A few approaches involve query rewriting, so taking a query and reframing it, like writing it from a different perspective, and that's what we're going to talk about a little bit here in depth, using approaches like multi-query or RAG fusion, which we'll talk about in the next video. You can also do things like take a question and break it down to make it less abstract, like into sub-questions, and there's a bunch of interesting papers focused on that, like Least-to-Most from Google. You can also take the opposite approach of taking a question and making it more abstract, and there's actually an approach we're going to talk about later in a future video called step-back prompting that focuses on asking kind of a higher-level question from the input. So the intuition for this multi-query approach is that we're taking a question and we're going to break it down into a few differently worded questions from different perspectives. And the intuition here is simply that it is possible that the way a question is initially worded, once embedded, is not well aligned or in close proximity in this high dimensional embedding space to a document we want to retrieve that's actually related. By rewriting it in a few different ways, you actually increase the likelihood of retrieving the document that you really want to. Because of nuances in the way that documents and questions are embedded, this kind of more shotgun approach of taking a question and fanning it out into a few different perspectives may improve and increase the reliability of retrieval. That's the intuition, really. And, of course, we can combine this with retrieval, so we can take our fanned-out questions, do retrieval on each one, and combine them in some way and perform RAG. So that's kind of the overview. And now let's go over to our code. So this is a notebook, and we're going to share all this. We're just installing a few packages. We're setting a LangSmith API key, which we'll see why that's quite useful here shortly. There's our diagram. Now first, I'm going to index this blog post on agents. I'm going to load it, I'm going to split it, and then I'm going to index it in Chroma locally. So this is a vector store. We've done this previously, so now I have my index defined. So here is where I'm defining my prompt for multi-query, which says: you're an assistant; your task is to basically reframe this question into a few differently worded sub-questions. So there's our prompt right here. We'll pass that to an LLM, parse it into a string, and then split the string by new lines.
And so we'll get a list of questions out of this chain. That's really all we're doing here. Now, all we're doing is, here's a sample input question, there's our generate queries chain, which we defined, and we're going to take that list and then simply apply each question to a retriever. So we'll do retrieval per question, and this little function here is just going to take the unique union of documents across all those retrievals. So let's run this and see what happens. So we're going to run this and we're going to get some set of documents back. So let's go to LangSmith now. We can actually see what happened under the hood. So here's the key point. We ran our initial chain to generate a set of reframed questions from our input. And here is that prompt, and here is that set of questions that we generated. Now what happened is, for every one of those questions, we did an independent retrieval. That's what we're showing here. So that's kind of the first step, which is great. Now I can go back to the notebook and we can show this working end to end. So now we're going to take that retrieval chain, we'll pass it in as the context of our final RAG prompt, we'll also pass through the question, we'll pass that to our RAG prompt here, pass it to an LLM, and then parse the output. Now let's see how that works. So again, there it is. So let's actually go into LangSmith and see what happened under the hood. So this is our final chain. This is great. We took our input question, we broke it down to these five rephrased questions, and for every one of those, we did a retrieval. That's all great. We then took the unique union of documents, and you can see in our final LLM prompt: answer the following question based on the context. This is the final set of unique documents that we retrieved from all of our sub-questions. Here's our initial question, there's our answer. So that kind of shows you I can set this up really easily. I can use LangSmith to investigate what's going on, and in particular use LangSmith to investigate those intermediate questions that you generate in that question generation phase. And in future talks, we're going to go through some of these other methods that we kind of introduced at the start of this one. Thank you.
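A sketch of the per-question retrieval plus the unique-union step described above, assuming the `generate_queries` chain and `retriever` from the earlier sketches:

    # Retrieve for each generated question, then dedupe across the results.
    from langchain.load import dumps, loads

    def get_unique_union(documents: list[list]):
        # Flatten the list of lists and dedupe documents by their serialized form.
        flattened = [dumps(doc) for sublist in documents for doc in sublist]
        return [loads(doc) for doc in set(flattened)]

    retrieval_chain = generate_queries | retriever.map() | get_unique_union

    question = "What is task decomposition for LLM agents?"
    docs = retrieval_chain.invoke({"question": question})
    print(len(docs))  # unique documents gathered across all sub-questions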
RAG from scratch: Part 5 (Query Translation -- Multi Query)
369
LangChain
20240214
Query rewriting is a popular strategy to improve retrieval. Multi-query is an approach that re-writes a question from multiple perspectives, performs retrieval on each re-written question, and takes the unique union of all docs. Slides: https://docs.google.com/presentation/d/15pWydIszbQG3Ipur9COfTduutTZm6ULdkkyX-MNry8I/edit?usp=sharing Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb
2024-06-10T21:19:51.167785
https://www.youtube.com/watch?v=77qELPbNgxA
Hi, this is Lance from LangChain. This is the second video of our deep dive on query translation in our RAG from Scratch series, focused on the method called RAG fusion. So as we kind of showed before, query translation you can think of as the first stage in an advanced RAG pipeline. We're taking an input user question and we're translating it in some way in order to improve retrieval. Now, we showed this general mapping of approaches previously. So again, you have kind of like rewriting, so you can take a question and kind of break it down into differently worded or different perspectives of the same question; that's rewriting. There's sub-questions, where you take a question, break it down into smaller problems, and solve each one independently. And then there's step-back, where you take a question and kind of go more abstract, where you ask a higher-level question as a precondition to answering the user question. So those are the approaches, and we're going to dig into one of the particular approaches for rewriting called RAG fusion. Now, this is really similar to what we just saw with multi-query. The difference being, we actually apply a kind of clever ranking step over our retrieved documents, which we call reciprocal rank fusion. That's really the only difference. The input stage of taking a question, breaking it out into a few differently worded questions, and doing retrieval on each one is all the same. And we're going to see that in the code here shortly. So let's just hop over there and look at this. So again, here's a notebook that we introduced previously. Here's the packages we've installed. We've set a few API keys for LangSmith, which you'll see why that's quite useful. And you can go down here to our RAG fusion section. And the first thing you'll note is what our prompt is. So it looks really similar to the prompt we just saw with multi-query. It simply says: you're a helpful assistant that generates multiple search queries based on the user input. And here's the question; output four queries. So let's define our prompt. And here is our query generation chain. Again, this looks a lot like we just saw: we take our prompt, plumb that into an LLM, and then basically parse by new lines, and that'll split out these questions into a list. That's all that's going to happen here. So that's cool. Now, here's where the novelty comes in. Each time we do retrieval from one of those questions, we're going to get back a list of documents from our retriever. We generated four questions here based on our prompt, so we do retrieval over each of those four questions, and we end up with a list of lists, basically. Now, reciprocal rank fusion is really well suited for this exact problem. We want to take this list of lists and build a single consolidated list. And really, all that's going on is it's looking at the documents in each list and kind of aggregating them into a final output ranking, and that's really the intuition around what's happening here. So let's go ahead and look at that in some detail. So we can see we run retrieval. That's great. Now let's go over to LangSmith and have a look at what's going on here. So we can see that here is our prompt: you're a helpful assistant that generates multiple search queries based on a single input. And here are our search queries. And then here are our four retrievals. So that's really good. So we know that all is working.
And then those retrievals simply went into this rank fusion function and were correspondingly ranked into a final list of six unique ranked documents. That's really all we did. So let's actually put that all together into a full RAG chain that's going to run retrieval, return that final list of ranked documents, pass it to our context, pass through our question, send that to our RAG prompt, pass it to an LLM, and parse the output. Let's run all that together and see it working. Cool. So there's our final answer. Now let's have a look in LangSmith. We can see here were our four questions, here are our retrievals, and then our final RAG prompt plumbed through the final list of six ranked documents, which we can see laid out here, and our final answer. So this can be really convenient, particularly if we're operating across, say, different vector stores, or we want to do retrieval across a large number of differently worded questions. This reciprocal rank fusion step is really nice. For example, if we wanted to only take the top three documents or something, it can be really nice to build that consolidated ranking across all these independent retrievals and pass that to the final generation. So that's really the intuition about what's happening here. Thanks.
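Here is a minimal sketch of the reciprocal rank fusion step described in this walkthrough. It works over plain document IDs rather than LangChain Document objects, and the smoothing constant k=60 and the sample IDs are illustrative assumptions, not the notebook's exact code.

from collections import defaultdict

def reciprocal_rank_fusion(list_of_ranked_doc_lists, k=60):
    """Fuse several ranked lists of doc IDs into one consolidated ranking.

    Each document's fused score is the sum of 1 / (k + rank) over every list it
    appears in, so docs ranked highly by several rewritten queries rise to the top.
    """
    scores = defaultdict(float)
    for ranked_docs in list_of_ranked_doc_lists:
        for rank, doc_id in enumerate(ranked_docs):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Example: retrievals for four rewritten queries (doc IDs are hypothetical)
retrievals = [
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_e", "doc_b", "doc_a"],
    ["doc_a", "doc_d", "doc_f"],
]
print(reciprocal_rank_fusion(retrievals))  # doc_a and doc_b end up near the top

Documents that appear near the top of several per-query rankings accumulate the largest fused scores, which is why the method is robust to any single noisy rewrite.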
RAG from scratch: Part 6 (Query Translation -- RAG Fusion)
342
LangChain
20240214
Query rewriting is a popular strategy to improve retrieval. RAG-fusion is an approach that re-writes a question from multiple perspectives, performs retrieval on each re-written question, and performs reciprocal rank fusion on the results from each retrieval, giving a consolidated ranking. Slides: https://docs.google.com/presentation/d/1EwykmdVSQqlh6XpGt8APOMYp4q1CZqqeclAx61pUcjI/edit?usp=sharing Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb Reference: https://github.com/Raudaschl/rag-fusion
2024-06-10T21:20:49.150492
https://www.youtube.com/watch?v=xn1jEjRyJ2U
Hi, this is Lance from LangChain. This is the fourth video in our deep dive on query translation in the RAG from Scratch series, and we're going to be focused on step-back prompting. So query translation, as we said in some of the prior videos, sits at the first stage of a RAG pipeline or flow, and the main aim is to take an input question and to translate or modify it in such a way that it improves retrieval. Now, we talked through a few different ways to approach this problem. One general approach involves rewriting a question, and we talked about two ways to do that: RAG fusion and multi-query. Again, this is really about taking a question and modifying it to capture a few different perspectives, which may improve the retrieval process. Another approach is to take a question and make it less abstract, like break it down into sub-questions, and then solve each of those independently. That's what we saw with least-to-most prompting and a bunch of other variants in that vein of sub-problem solving and then consolidating those solutions into a final answer. Now, a different approach, presented again by Google as well, is step-back prompting. Step-back prompting takes kind of the opposite approach, where it tries to ask a more abstract question. The paper talks a lot about using few-shot prompting to produce what they call these step-back, or more abstract, questions, and the way it does it is by providing a number of examples of step-back questions given your original question. So, for example, the prompt template is: you're an expert in world knowledge, I ask you a question, your response should be comprehensive and not contradict the following context. And this is where you provide your original question and then the step-back question. So here are some example questions: which year saw the creation of the region where the county is located? Which region of the country is the county of Hertford related to? Janssen-Dill was born in what country? What is Janssen-Dill's personal history? That second pair is a more intuitive example: you ask a very specific question about the country someone was born in, and the more abstract question is, just give me the general history of this individual, without worrying about that particular, more specific question. So let's actually just walk through how this can be done in practice. Again, here's a diagram of the various approaches from less abstraction to more abstraction. Now, here is where we're formulating our prompt using a few of the few-shot examples from the paper. So again, the input is something about the police performing lawful arrests, and the step-back question is what can members of the police do. It basically gives the model a few examples, and we formulate this into a prompt. That's really all that's going on here. Again, we repeat this overall prompt, which we saw from the paper: you're an expert at world knowledge, your task is to step back and paraphrase a question and generate a more generic step-back question, which is easier to answer; here are some examples. So it's a very intuitive prompt. Okay, let's start with the question, what is task composition for LLM agents, and we're going to generate the step-back question. Okay, so this is pretty intuitive, right?
What is the process of task composition? So, not worrying as much about agents, but what is the process of task composition in general? And then hopefully that can be independently retrieved: we can independently retrieve documents related to the step-back question, and in addition retrieve documents related to the actual question, and combine those to produce the final answer. So that's really all that's going on. And here's the response template, where we're plumbing in the step-back context and our question context. So what we're going to do here is take our input question and perform retrieval on that; we're also going to generate our step-back question and perform retrieval on that. We're going to plumb those into the prompt, and here are basically our prompt keys: the normal question context, the step-back question context, and our overall question. Again, we formulate those as a dict, we plumb those into our response prompt, and then we go ahead and attempt to answer our overall question. So we're going to run that, that's running, and okay, we have our answer. Now I want to hop over to LangSmith and show you what that looked like under the hood. So let's go into each of these steps. Here was our prompt, right? You're an expert in world knowledge, your task is to step back and paraphrase a question. Here were our few-shot prompts, and this was our step-back question: what is the process of task composition? Good. From the input, what is task composition for LLM agents. We perform retrieval on both: what is the process of task composition, and what is task composition for LLM agents. We perform both retrievals, and we then populate our prompt with both: the original question, the answer, and then here's the context retrieved from both the question and the step-back question, and here's our final answer. So again, this is a nice technique. It probably depends a lot on the type of domain you want to perform retrieval on, but in some domains where, for example, there's a lot of conceptual knowledge that underpins the questions you expect users to ask, this step-back approach could be really convenient for automatically formulating a higher-level question to, for example, try to improve retrieval. I can imagine if you're working with textbooks or technical documentation where you have independent chapters focused on more high-level concepts and other chapters on more detailed implementations, this step-back approach and independent retrieval could be really helpful. Thanks.
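Below is a minimal sketch of the step-back flow just described, assuming an OpenAI API key and an existing retriever from the notebook. The few-shot examples and prompt wording are paraphrased from the discussion above, not the notebook's exact text.

from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Few-shot examples teach the model what a "step-back" question looks like.
examples = [
    {"input": "Could the members of The Police perform lawful arrests?",
     "output": "What can the members of The Police do?"},
    {"input": "Jan Sindel was born in what country?",
     "output": "What is Jan Sindel's personal history?"},
]
example_prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), ("ai", "{output}")]
)
few_shot = FewShotChatMessagePromptTemplate(examples=examples, example_prompt=example_prompt)

step_back_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert at world knowledge. Paraphrase the question "
               "into a more generic step-back question that is easier to answer."),
    few_shot,
    ("human", "{question}"),
])
generate_step_back = step_back_prompt | ChatOpenAI(temperature=0) | StrOutputParser()

question = "What is task composition for LLM agents?"
step_back_q = generate_step_back.invoke({"question": question})

# Retrieve on both the original and the step-back question, then answer with both contexts.
normal_ctx = retriever.get_relevant_documents(question)      # assumes `retriever` exists
step_back_ctx = retriever.get_relevant_documents(step_back_q)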
RAG from scratch: Part 8 (Query Translation -- Step Back)
418
LangChain
20240214
Step-back prompting is an approach to improve retrieval that builds on chain-of-thought reasoning. From a question, it generates a step-back (higher level, more abstract) question that can serve as a precondition to correctly answering the original question. This is especially useful in cases where background knowledge or more fundamental understanding is helpful to answer a specific question. Slides: https://docs.google.com/presentation/d/1L0MRGVDxYA1eLOR0L_6Ze1l2YV8AhN1QKUtmNA-fJlU/edit?usp=sharing Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb Reference: https://arxiv.org/pdf/2310.06117.pdf
2024-06-10T10:36:35.395680
https://www.youtube.com/watch?v=h0OPWlEOank
Hi, this is Lance from Langchain. This is our third video focused on query translation in the RAG from Scratch series, and we're going to be talking about decomposition. So query translation in general is a set of approaches that sits towards the front of this overall RAG pipeline, and the objective is to modify, rewrite, or otherwise decompose an input question from a user in order to improve retrieval. We talked through some of these approaches previously, in particular various ways to do query rewriting, like RAG fusion and multi-query. There's a separate set of techniques that have become pretty popular and are really interesting for certain problems, which you might call breaking down or decomposing an input question into a set of sub-questions. Some of the papers here that are pretty cool are, for example, this work from Google, and the objective really is first to take an input question and decompose it into a set of sub-problems. This particular example from the paper was the problem of last-letter concatenation, and so it took the input question of three words, "think machine learning", and broke it down into three sub-problems: think; think machine; and think machine learning as the third sub-problem. And then you can see in this bottom panel it solves each one individually. So it shows, for example, in green, solving the problem "think machine", where you concatenate the last letter of think, K, with the last letter of machine, E, to get KE. And then for the overall problem, it takes that solution and basically builds on it to get the overall solution of KEG. So that's one concept: decomposing into sub-problems and solving them sequentially. Now, a related work called IRCoT, or interleaved retrieval chain-of-thought, combines retrieval with chain-of-thought reasoning. And so you can put these together into one approach, which you can think of as dynamically doing retrieval to solve a set of sub-problems: retrieval interleaved with chain of thought, as noted in the second paper, applied to a set of decomposed questions based on your initial question, from the first work from Google. So really the idea here is: we take one sub-question, we answer it, and we take that answer and use it to help answer the second sub-question, and so forth. So let's actually just walk through this in code to show how this might work. This is the notebook we've been working with from some of the other videos. You can see we already have a retriever defined up here at the top, and what we're going to do first is define a prompt that basically says: given an input question, break it down into a set of sub-problems or sub-questions which can be solved individually. So we can do that, and this blog post is focused on agents, so let's ask a question about what are the main components of an LLM-powered autonomous agent system. Let's run this and see what the decomposed questions are. You can see the decomposed questions are: what is LLM technology and how does it work? What are its components? And then, how do the components interact? So it's a sane way to break down this problem into a few sub-problems which you might attack individually. Now here's where we define a prompt that very simply is going to take our question, any prior questions we've answered, and our retrieval, and basically just combine them. And we can define this very simple chain.
Actually, let's go back and make sure our retriever is defined up at the top. So now we're building our retriever. Good, we have that now. So we can go back down here and let's run this. So now we are running, and what's happening is we're trying to solve each of these questions individually, using retrieval and using any prior question answers. Okay, very good. Looks like that's been done, and we can see here's our answer. Now let's go over to LangSmith and actually see what happened under the hood. So here's what's kind of interesting and helpful to see. For the first question, so here's our first one, it looks like it just does retrieval, which is what we expect, and then it uses that to answer the initial question. Now the second question should be a little bit more interesting, because if you look at our prompt, here's our question, and here is our background: the available question-answer pair. So this was the question-answer pair from the first question, which we add to our prompt, and then here's the retrieval for this particular question. So we're building up the solution, because we're appending the question-answer pair from question one. And then likewise with question three, it should combine all of that. We can look at it here: here's our question, here's question one, here's question two, great. Now here's additional retrieval related to this particular question, and we get our final answer. So that's a really nice way you can build up solutions using this kind of interleaved retrieval and concatenation of prior question-answer pairs. I do want to mention very briefly that we can also take a different approach, where we just answer these all individually and then concatenate all those answers to produce a final answer. And I'll show that really quickly here. It's a little bit less interesting, maybe, because you're not using answers from each question to inform the next one; you're just answering them all in parallel. This might be better for cases where it's not really a sub-question decomposition, but maybe a set of several independent questions whose answers don't depend on each other. That might be relevant for some problems. And we can go ahead and run. Okay, so this ran as well. We can look at our trace, and in this case, yeah, we can see that this actually just concatenates all of our QA pairs to produce the final answer. So this gives you a sense for how you can use query decomposition and employ ideas from two different papers that are pretty cool. Thanks.
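A minimal sketch of the sequential decomposition loop described above, assuming an OpenAI API key and an existing retriever from the notebook. The prompt wording, helper names, and the fixed count of three sub-questions are illustrative assumptions, not the notebook's exact code.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# Break the question into standalone sub-questions, one per line.
decompose = (
    ChatPromptTemplate.from_template(
        "Break the input question into 3 standalone sub-questions, one per line.\n"
        "Question: {question}"
    )
    | llm
    | StrOutputParser()
    | (lambda text: [q.strip() for q in text.split("\n") if q.strip()])
)

answer_prompt = ChatPromptTemplate.from_template(
    "Question: {question}\n\n"
    "Previously answered Q&A pairs:\n{q_a_pairs}\n\n"
    "Context:\n{context}\n\n"
    "Use the context and any prior answers to answer the question."
)

question = "What are the main components of an LLM-powered autonomous agent system?"
q_a_pairs = ""
for sub_q in decompose.invoke({"question": question}):
    docs = retriever.get_relevant_documents(sub_q)           # assumes `retriever` exists
    answer = (answer_prompt | llm | StrOutputParser()).invoke(
        {"question": sub_q, "q_a_pairs": q_a_pairs, "context": docs}
    )
    q_a_pairs += f"\nQ: {sub_q}\nA: {answer}\n"               # interleave prior answers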
RAG from scratch: Part 7 (Query Translation -- Decomposition)
397
LangChain
20240219
Query decomposition is a strategy to improve question-answering by breaking down a question into sub-questions. These can either be (1) solved sequentially or (2) independently answered followed by consolidation into a final answer. Slides: https://docs.google.com/presentation/d/1O97KYrsmYEmhpQ6nkvOVAqQYMJvIaZulGFGmz4cuuVE/edit?usp=sharing Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb Reference: https://arxiv.org/abs/2205.10625 https://arxiv.org/pdf/2212.10509
2024-06-10T21:23:21.365004
https://www.youtube.com/watch?v=pfpIndq7Fi8
Hi, this is Lance from Langchain. This is the 10th video in our RAG from Scratch series, focused on routing. So we talked through query translation, which is the process of taking a question and translating it in some way: it could be decomposing it, using step-back prompting, or otherwise. But the idea there was to take our question and change it into a form that's better suited for retrieval. Now, routing is the next step, which is basically routing that potentially decomposed question to the right source, and in many cases that could be a different database. So let's say in this toy example we have a vector store, a relational DB, and a graph DB. What we do with routing is simply route the question, based upon its content, to the relevant data source. There are a few different ways to do that. One is what we call logical routing. In this case, we basically give an LLM knowledge of the various data sources that we have at our disposal, and we let the LLM reason about which one to apply the question to. So it's kind of like the LLM is applying some logic to determine which data source, for example, to use. Alternatively, you can use semantic routing, which is where you take a question and embed it, and, for example, we also embed prompts. We then compute the similarity between our question and those prompts, and then we choose a prompt based upon that similarity. So the general idea is: in our diagram we talk about routing to, for example, a different database, but it can be very general. It can be routing to a different prompt; it can be really arbitrarily taking this question and sending it different places, be it different prompts, be it different vector stores. So let's walk through the code a little bit. You can see, just like before, we've done a few pip installs and we've set up LangSmith, and let's talk through logical routing first. In this toy example, let's say we had three different docs: Python docs, JS docs, and Golang docs. What we want to do is take a question and route it to one of those three. What we're actually doing is setting up a data model which is basically going to be bound to our LLM and allow the LLM to output one of these three options as a structured object. So you can really think about this as classification: classification plus function calling to produce a structured output, which is constrained to these three possibilities. The way we do that is, let's just zoom in here a little bit: we define a structured object that we want to get out from our LLM. In this case, we want one of these three data sources to be the output. We can take this and actually convert it into, for example, an OpenAI function schema, and then we pass that in and bind it to our LLM. So what happens is we ask a question, and our LLM invokes this function on the output to produce an output that adheres to the schema we specify. So in this toy example, let's say we wanted the output to be a data source, vector store or SQL database; the output will contain a datasource field, and it'll be one of the options we specify, as a JSON string. We also instantiate a parser from this object to parse that JSON string to an output like a Pydantic object, for example. So that's just one toy example, and let's show one up here. So in this case, again, we had our three doc sources, and we bind that to our LLM.
So you can see we use with_structured_output; basically, under the hood, that's taking that object definition, turning it into a function schema, and binding that function schema to our LLM. And we call our prompt: you're an expert at routing a user question based on the programming language that it's referring to. So let's define our router here. Now what we're going to do is ask a question that contains Python code. So we'll call that, and now it's done. And you can see the object we get out is indeed a RouteQuery object, so it adheres exactly to the data model we've set up, and in this case it's correct: it's calling this Python docs source, and we can extract that right here as a string. Now, once we have this, you can really easily set up a route. So this could be our full chain, where we take this router, which is defined here, and then this choose_route function can basically take that output and do something with it. For example, if it's Python docs, this could then apply the question to a retriever full of Python information; for JS, same thing. So this is where you would hook that question up to different chains: retriever chain one for Python, retriever chain two for JS, and so forth. That's kind of the routing mechanism, but this is really doing the heavy lifting of taking an input question and turning it into a structured object that restricts the output to one of a few output types that we care about in our routing problem. So that's really the way this all hooks together. Now, semantic routing is actually maybe a little bit more straightforward based on what we've seen previously. In that case, let's say we have two prompts: a physics prompt and a math prompt. We can embed those prompts, no problem; we do that here. Now let's say we have an input question from a user, like in this case, what is a black hole? We pass that through, and we then apply this runnable lambda function, which is defined right here. What we're doing here is embedding the question, computing similarity between the question and the prompts, taking the most similar, and then basically choosing the prompt based on that similarity. And you can see, let's run that and try it out, and we're using the physics prompt, and there we go: black holes, a region in space. So that just shows you how you can use semantic routing to embed a question, embed, for example, various prompts, and pick the prompt based on semantic similarity. So that really gives you just two ways to do routing. One is logical routing with function calling, which can be used very generally. In this case, we applied it to different coding languages, but imagine these could be swapped out for my vector store versus my graph DB versus my relational DB. You could very simply have some description of what each is, and then not only will the LLM do reasoning, but it'll also return an object that can be parsed very cleanly to produce one of a few very specific types, which you can then reason over, like we did here, in your routing function. So that gives you the general idea, and these are really very useful tools, and I encourage you to experiment with them. Thanks.
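A minimal sketch of the logical routing pattern described above, assuming an OpenAI API key; the source names and prompt wording are illustrative, and in practice choose_route would dispatch to a real retriever chain per source.

from typing import Literal
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class RouteQuery(BaseModel):
    """Route a user question to the most relevant data source."""
    datasource: Literal["python_docs", "js_docs", "golang_docs"] = Field(
        description="The docs most relevant to the user question."
    )

llm = ChatOpenAI(temperature=0)
structured_llm = llm.with_structured_output(RouteQuery)  # binds the schema via function calling

prompt = ChatPromptTemplate.from_messages([
    ("system", "Route the question to the relevant data source based on the "
               "programming language it refers to."),
    ("human", "{question}"),
])
router = prompt | structured_llm

result = router.invoke({"question": "Why doesn't this code work: prompt.invoke('hi')"})
print(result.datasource)  # e.g. "python_docs"

def choose_route(route: RouteQuery) -> str:
    # In practice this would return a retriever chain per source.
    return {"python_docs": "chain for python_docs",
            "js_docs": "chain for js_docs",
            "golang_docs": "chain for golang_docs"}[route.datasource]

full_chain = router | choose_route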
RAG from scratch: Part 10 (Routing)
422
LangChain
20240318
This is the 10th video in our RAG From Scratch series, focused on different types of query routing (logical and semantic). Notebook: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_10_and_11.ipynb Slides: https://docs.google.com/presentation/d/1kC6jFj8C_1ZXDYcFaJ8vhJvCYEwxwsVqk2VVeKKuyx4/edit?usp=sharing
2024-06-10T10:38:08.509087
https://www.youtube.com/watch?v=kl6NwWYxvbM
Hi, this is Lance from Langchain. This is the 11th part of our RAG from Scratch video series, focused on query construction. So we've previously talked through query translation, which is the process of taking a question and converting or translating it into a question that's better optimized for retrieval. Then we talked about routing, which is the process of taking that question and routing it to the right source, be it a given vector store, graph DB, or SQL DB, for example. Now we're going to talk about the process of query construction, which is basically taking natural language and converting it into a particular domain-specific language for one of these sources. We're going to talk specifically about the process of going from natural language to metadata filters for vector stores. The problem statement is basically this: let's imagine we had an index of Langchain video transcripts. You might want to ask a question like, find me videos on chat langchain published after 2024, for example. The process of query structuring basically converts this natural language question into a structured query that can be applied to the metadata filters on your vector store. Most vector stores will have some kind of metadata filters that can do structured querying on top of the chunks that are indexed. So, for example, this type of query will retrieve all chunks that talk about the topic of chat langchain published after the date 2024. That's the problem statement, and to do this we're going to use function calling. In this case, you can use, for example, OpenAI or other providers to do that. What we're going to do, at a high level, is take the metadata fields that are present in our vector store and provide them to the model as information, and the model can then take those and produce queries that adhere to the schema provided. And then we can parse those out to a structured object, like a Pydantic object, which can then be used in search. So that's the problem statement, and let's actually walk through the code. Here's our notebook, which we've gone through previously, and I'll just show you an example: let's take an example YouTube video and look at the metadata that you get with the transcript. You can see you get stuff like description, URL, publish date, length, things like that. Now, let's say we had an index that basically had a number of different metadata fields and filters that allowed us to do range filtering on, say, view count, publication date, or video length, and unstructured search on contents and title. So imagine we had an index that had those kinds of filters available to us. What we can do is capture that information about the available filters in an object. We're calling it this TutorialSearch object, and it encapsulates the information about the available searches that we can do. So we basically enumerate here content search and title search, the semantic searches that can be done over those fields, and then these filters are the various types of structured searches we can do on, say, the length, the view count, and so forth. So we can just build that object. Now, we can set this up really easily with a basic, simple prompt that says: you're an expert at converting natural language into database queries, you have access to a database of tutorial videos, and given a question, return a database query optimized for retrieval.
So that's kind of it. Now, here's the key point, though. When you call this LLM with structured output, you're binding this Pydantic object, which contains all the information about our index, to the LLM, which is exactly what we talked about previously. It's really this process right here: you're taking this object, you're converting it into a function schema, for example for OpenAI, you're binding that to your model, and then from a natural language question you get a JSON string out that is parsed into the Pydantic object, which is what you get back. So that's really the flow, and it's taking advantage of function calling, as we said. If we go back down, we set up our query analyzer chain right here. Now let's try to run that on a purely semantic input, so: rag from scratch. Let's run that, and you can see this just does a content search and a title search; that's exactly what we would expect. Now, if we pass a question that includes a date filter, let's just see if that would work. And there we go. So you still get that semantic search, but you also get search over, for example, published date, earliest and latest published date, as you would expect. Let's try another one here: videos focused on the topic of chat langchain that are published before 2024. This is just a rewrite of this question in a slightly different way, using a different date filter, and you can see we get content search, title search, and then a date search. So this is a very general strategy that can be applied broadly to different kinds of querying you want to do. It's really the process of going from an unstructured input to a structured query object out, following an arbitrary schema that you provide. And as noted, this whole thing we created here, this TutorialSearch, is based upon the specifics of our vector store of interest. If you want to learn more about this, I linked to some documentation here that talks a lot about the different integrations we have with different vector store providers to do exactly this. So it's a very useful trick: it allows you to do query metadata filtering on the fly from a natural language question, and it's a very convenient trick that works with many different vector DBs. So I encourage you to play with it. Thanks.
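A minimal sketch of the query structuring idea described above, assuming an OpenAI API key. The field names on TutorialSearch are illustrative stand-ins for whatever metadata filters your vector store actually exposes, not the notebook's exact schema.

import datetime
from typing import Optional
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class TutorialSearch(BaseModel):
    """Search over a database of tutorial videos."""
    content_search: str = Field(description="Similarity search applied to video transcripts.")
    title_search: str = Field(description="Succinct version of the query for video titles.")
    earliest_publish_date: Optional[datetime.date] = Field(
        None, description="Only include videos published on or after this date.")
    latest_publish_date: Optional[datetime.date] = Field(
        None, description="Only include videos published before this date.")
    max_length_sec: Optional[int] = Field(
        None, description="Maximum video length in seconds.")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You convert user questions into database queries for a library "
               "of tutorial videos. Return a query optimized for retrieval."),
    ("human", "{question}"),
])
query_analyzer = prompt | ChatOpenAI(temperature=0).with_structured_output(TutorialSearch)

q = query_analyzer.invoke({"question": "videos on chat langchain published before 2024"})
# q.latest_publish_date would then be translated into your vector store's own filter syntax.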
RAG from scratch: Part 11 (Query Structuring)
359
LangChain
20240327
Our RAG From Scratch video series walks through impt RAG concepts in short / focused videos w/ code. Problem: We interact w/ databases using domain-specific languages (e.g., SQL, Cypher for Relational and Graph DBs). And, many vectorstores have metadata that can allow for structured queries to filter chunks. But RAG systems ingest questions in natural language. Idea: A great deal of work has focused on query structuring, the process of text-to-DSL where DSL is a domain specific language required to interact with a given database. This converts user questions into structured queries. Below are links that dive into text-to-SQL/Cypher, and the below video overviews query structuring for vectorstores using function calling. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_10_and_11.ipynb References: 1/ Blog with links to various tutorials and templates: https://blog.langchain.dev/query-construction/ 2/ Deep dive on graphDBs (c/o @neo4j): https://blog.langchain.dev/enhancing-rag-based-applications-accuracy-by-constructing-and-leveraging-knowledge-graphs/ 3/ Query structuring docs: https://python.langchain.com/docs/use_cases/query_analysis/techniques/structuring 4/ Self-query retriever docs: https://python.langchain.com/docs/modules/data_connection/retrievers/self_query
2024-06-10T10:39:19.350164
https://www.youtube.com/watch?v=gTCU9I6QqCE
Hi, this is Lance from Langchain. I'm going to talk about indexing, and multi-representation indexing in particular, for the 12th part of our RAG from Scratch series. So we previously talked about a few different major areas. We talked about query translation, which takes a question and translates it in some way to optimize retrieval. We talked about routing, which is the process of taking a question and routing it to the right data source, be it a vector store, graph DB, or SQL DB. We talked about query construction; we dug into basically query construction for vector stores, but of course there's also text-to-SQL and text-to-Cypher. So now we're going to talk about indexing a bit. In particular, we're going to talk about indexing techniques for vector stores, and I want to highlight one particular method today called multi-representation indexing. The high-level idea here is derived a bit from a paper called Proposition Indexing, which makes a simple observation: you can think about decoupling raw documents from the units you use for retrieval. In the typical case, you take a document, split it up in some way to index it, and then embed the split directly. This paper talks about taking a document, splitting it in some way, but then using an LLM to produce what they call a proposition, which you can think of as a kind of distillation of that split. So it's using an LLM to modify that split in some way, to distill it or make it a crisper summary, so to speak, that's better optimized for retrieval. So that's one highlight, one piece of intuition. We've actually taken that idea and built on it a bit in a really nice way that I think is very well suited, actually, for long-context LLMs. The idea is pretty simple: you take a document and you distill it, or create a proposition like they show in the prior paper; I typically think of this as just producing a summary of the document, and you embed that summary. So that summary is meant to be optimized for retrieval: it might contain a bunch of keywords from the document, or the big ideas, such that when you embed the summary and you embed a question and do search, you can find that document based upon this highly optimized summary for retrieval. So that's what's represented here in your vector store. But here's the catch: you independently store the raw document in a doc store, and when you retrieve the summary in the vector store, you return the full document for the LLM to perform generation. And this is a nice trick, because at generation time, now with long-context LLMs, for example, the LLM can handle that entire document. You don't need to worry about splitting it or anything. You simply use the summary to create a really nice representation for fishing out that full doc, and then use that full doc in generation. There might be a lot of reasons you want to do that: you want to make sure the LLM has the full context to actually answer the question. So that's the big idea. It's a nice trick, and let's walk through some code here. So we have a notebook all set up, just like before, and we've done some pip installs.
Set some API keys here for LangSmith, and here's a diagram. Now let me show an example. Let's just load two different blog posts: one's about agents, one is about human data quality. And what we're going to do is create a summary of each of those. So this is the first step of that process, where we're going from the raw documents to summaries. Let's just have a look and make sure those ran. Okay, cool. So the first doc discusses building autonomous agents; the second doc discusses the importance of high-quality human data in training. Okay, so that's pretty nice, we have our summaries. Now we're going to go through a process that's pretty simple. First, we define a vector store that's going to index those summaries. Then we define what we call our document store, which is going to store the full documents. This multi-vector retriever just pulls those two things together: we add our doc store, we add this byte store as basically the full document store, the vector store is our vector store, and this ID is what we're going to use to reference between the chunks or summaries and the full documents. That's really it. So now, for every document, we define a new doc ID, and then we take our summary documents, and for each of our summaries we get the associated doc ID. So there we go. Let's go ahead and do that. We have our summary docs, which we add to the vector store; we have our doc IDs and the full raw documents, which are added to our doc store. And then let's just do a query, like a similarity search on our vector store: memory and agents. And we can see, okay, from the summaries we can get, for example, the summary that pertains to agents. That's a good thing. Now let's go ahead and run a query, get relevant documents, on our retriever, which basically combines the summaries, which we use for retrieval, with the doc store, which we use to get the full doc back. So we apply our query, we run this, and here's the key point: we've gotten back the entire article. And if you want to look at the whole thing, we can just go ahead and do this. Here we go. So this is the entire article that we get back from that search. So it's a pretty nice trick. Again, we query with just "memory and agents", and we can go back to our diagram here: we query with "memory and agents", it searches our summaries, it found the summary related to memory and agents, it uses that doc ID to reference between the vector store and the doc store, it fishes out the right full doc, and it returns us the full document, in this case the full web page. That's really it. A simple idea, and a nice way to go from simple proposition-style or summary-style indexing to full document retrieval, which is very useful, especially with long-context LLMs. Thank you.
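A minimal sketch of the multi-representation indexing setup described above, following the MultiVectorRetriever pattern from the LangChain docs and assuming an OpenAI API key. The placeholder documents and summaries are illustrative; in the walkthrough they come from loaded blog posts and an LLM summarization chain.

import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryByteStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

# Placeholder inputs (assumptions): full raw documents plus one summary per document.
docs = [Document(page_content="<full text of blog post 1>"),
        Document(page_content="<full text of blog post 2>")]
summaries = ["<LLM summary of blog post 1>", "<LLM summary of blog post 2>"]

id_key = "doc_id"
vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,          # indexes the small summaries
    byte_store=InMemoryByteStore(),   # stores the full raw documents
    id_key=id_key,
)

doc_ids = [str(uuid.uuid4()) for _ in docs]
summary_docs = [
    Document(page_content=s, metadata={id_key: doc_ids[i]})
    for i, s in enumerate(summaries)
]
retriever.vectorstore.add_documents(summary_docs)   # summaries go to the vector store
retriever.docstore.mset(list(zip(doc_ids, docs)))   # full docs go to the doc store

# Search hits a summary, but the retriever returns the linked full document.
full_docs = retriever.get_relevant_documents("memory in agents")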
RAG from scratch: Part 12 (Multi-Representation Indexing)
395
LangChain
20240328
Our RAG From Scratch video series walks through impt RAG concepts in short / focused videos w/ code. This is the 12th video in our series and focuses on some useful tricks for indexing full documents. Problem: Many RAG approaches focus on splitting documents into chunks and returning some number upon retrieval for the LLM. But chunk size and chunk number can be brittle parameters that many user find difficult to set; both can significantly affect results if they do not contain all context to answer a question. Idea: Proposition indexing (@tomchen0 et al) is a nice paper that uses an LLM to produce document summaries ("propositions") that are optimized for retrieval. We've built on this with two retrievers: (1) multi-vector retriever embeds summaries, but returns full documents to the LLM. (2) parent-doc retriever embeds chunks but returns full documents to the LLM. Idea is to get best of both worlds: use smaller / concise representations (summaries or chunks) to retrieve, but link them to full documents / context for generation. The approach is very general, and can be applied to tables or images: in both cases, index a summary but return the raw table or image for reasoning. This gets around challenges w/ directly embedding tables or images (multi-modal embeddings), using a summary as a representation for text-based similarity search. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_12_to_14.ipynb References: 1/ Proposition indexing: https://arxiv.org/pdf/2312.06648.pdf 2/ Multi-vector: https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector 3/ Parent-document: https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever 4/ Blog applying this to tables: https://blog.langchain.dev/semi-structured-multi-modal-rag/ 5/ Blog applying this to images w/ eval: https://blog.langchain.dev/multi-modal-rag-template/
2024-06-10T10:41:07.246638
https://www.youtube.com/watch?v=z_6EeA2LDSw
Hi, this is Lance from Langchain. This is the 13th part of our RAG from Scratch series, focused on a technique called RAPTOR. So RAPTOR sits within an array of different indexing techniques that can be applied on vector stores. We just talked about multi-representation indexing, and I provided a link to a video that's very good at talking about the different means of chunking, so I encourage you to look at that. And we're going to talk today about a technique called RAPTOR, which you can think of as a technique for hierarchical indexing. The high-level intuition is this: some questions require very detailed information from a corpus to answer, like they pertain to a single document or single chunk. We can call those low-level questions. Some questions require consolidation across broad swaths of a document, so across many documents or many chunks within a document, and you can call those higher-level questions. And so there's this challenge in retrieval: typically we do k-nearest-neighbors retrieval, like we've been talking about, and you're fishing out some number of chunks. But what if you have a question that requires information across five, six, or some number of different chunks, which may exceed the k parameter in your retrieval? So again, when you typically do retrieval, you might set a k parameter of 3, which means you're retrieving three chunks from your vector store, and maybe you have a very high-level question that could benefit from information consolidated across more than three. This technique called RAPTOR is basically a way to build a hierarchical index of document summaries, and the intuition is this: you start with a set of documents as your leaves here on the left, you cluster them, and then you summarize each cluster. So each cluster of similar documents consolidates information from across your context, which could be a bunch of different splits or even a bunch of different documents; you're basically capturing similar ones and consolidating the information across them in a summary. And here's the interesting thing: you do that recursively, until either you hit a limit or you end up with one single cluster that's a very high-level summary of all of your documents. What the paper shows is that if you basically just collapse all of these and index them together as a big pool, you end up with a really nice array of chunks that span the abstraction hierarchy. You have a bunch of chunks from individual documents that are just more detailed chunks pertaining to a single document, but you also have the summaries, or maybe not chunks exactly; in this case the summary is like a distillation. So the raw chunks on the left that represent your leaves are the raw form of information, either raw chunks or raw documents, and then you have these higher-level summaries, which are all indexed together. So if you have higher-level questions, they should basically be more similar, in semantic search for example, to these higher-level summary chunks; if you have lower-level questions, then they'll retrieve the lower-level chunks, and so you have better semantic coverage across the abstraction hierarchy of question types. That's the intuition. They do a bunch of nice studies to show that this works pretty well. I actually did a deep-dive video just on this, which I link below.
I did want to cover it briefly, just at a very high level, so let's do a quick code walkthrough. I've added it to this RAG from Scratch course notebook, but I link over to my deep-dive video, as well as the paper and the full code notebook, which is already checked in and is discussed at more length in the deep dive. The technique is a little bit detailed, so I only want to give you a very high-level overview here, and you can look at the deep-dive video if you want to go into more depth. Again, we talked through this abstraction hierarchy. I apply this to a large set of LangChain documents: this is me loading basically all of our LangChain Expression Language docs, on the order of 30 documents. You can see I do a histogram here of the token counts per document; some are pretty big, most are fairly small, less than 4,000 tokens. And what I did is I indexed all of them individually, so all those raw documents you can imagine are here on the left. Then I do embedding, clustering, and summarization, and I do that recursively until I end up with, in this case, I believe I only set three levels of recursion, and then I save them all in my vector store. So that's the high-level idea: I'm applying this RAPTOR technique to a whole bunch of LangChain documents that have a fairly large number of tokens. So I do that, and I actually use both Claude as well as OpenAI here. This talks through the clustering method that they use, which is pretty interesting; you can dig into that on your own if you're really interested. This is a lot of their code, which I cite accordingly; it's basically implementing the clustering method that they use. And this is simply the document embedding stage: basically embedding and clustering, that's really it; some text formatting and summarizing of the clusters right here; and then this is just running that whole process recursively. That's really it. This is tree building. So basically I have the raw docs; let's just go back and look at doc_texts. This should be all my raw documents, and that's right, you can see it here: doc_texts is basically just the text of all those documents that I pulled. And so I run this process on them right here: this recursive embedding and clustering runs and produces the tree. Here are the results. This is me just going through the results and adding the result text to this list of texts. Okay, so here's what I do: this leaf_texts is all the raw documents, and I'm appending to that all the summaries. That's all that's going on, and then I'm indexing them all together. That's the key point. Set up a RAG chain, and there you have it; that's really all you do. So anyway, I encourage you to look at this in depth. It's a pretty interesting technique, and it works well with long contexts. For example, one of the arguments I made is that it's a nice approach to consolidate information across a span of large documents. In this particular case, my individual documents were LangChain Expression Language docs, most of them less than 4,000 tokens, some pretty big, but I index them all without any splits: embed them, cluster them, build this tree, and go from there. And it all works because we now have LLMs that can go out to 200,000, up to a million tokens in context.
So you can actually just do this process for big swaths of documents in place without any splitting. It's a pretty nice approach. So I encourage you to think about it, look at it, watch the deep dive video if you really wanna go deeper on this. Thanks.
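A simplified sketch of the recursive embed, cluster, and summarize loop described above, assuming an OpenAI API key. The RAPTOR paper (and the cookbook notebook) uses GMM clustering with UMAP dimensionality reduction; plain KMeans is used here purely to keep the sketch short, so treat the clustering choice and the helper names as illustrative assumptions.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from sklearn.cluster import KMeans

embd = OpenAIEmbeddings()
summarize = (
    ChatPromptTemplate.from_template("Summarize the following documents:\n\n{context}")
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

def build_tree(texts, n_levels=3, n_clusters=3):
    """Recursively embed, cluster, and summarize; return leaf texts plus all summaries."""
    all_texts = list(texts)   # leaf documents stay in the final index
    level_texts = texts
    for _ in range(n_levels):
        if len(level_texts) <= n_clusters:
            break
        vectors = embd.embed_documents(level_texts)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
        summaries = []
        for c in range(n_clusters):
            cluster_docs = [t for t, lab in zip(level_texts, labels) if lab == c]
            summaries.append(summarize.invoke({"context": "\n\n".join(cluster_docs)}))
        all_texts.extend(summaries)   # index summaries alongside the leaf docs
        level_texts = summaries       # the next level clusters the summaries
    return all_texts

The list returned by build_tree is what gets embedded into a single vector store, so low-level and high-level questions can both find a semantically close entry.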
RAG From Scratch: Part 13 (RAPTOR)
460
LangChain
20240329
Our RAG From Scratch video series walks through impt RAG concepts in short / focused videos w/ code. Problem: RAG systems need to handle "lower-level" questions that reference specific facts found in a single document or "higher-level" questions that distill ideas that span many documents. Handling both types of questions can be a challenge with typical kNN retrieval where only a finite number of doc chunks are retrieved. Idea: RAPTOR (@parthsarthi03 et al) is a paper that addresses this by creating document summaries that capture higher-level concepts. It embeds and clusters documents, and then summarizes each cluster. It does this recursively, producing a tree of summaries with increasingly high-level concepts. The summaries and starting docs are indexed together, giving coverage across user questions. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_12_to_14.ipynb https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb References: 1/ Paper: https://arxiv.org/pdf/2401.18059.pdf 2/ Longer deep dive: https://www.youtube.com/watch?v=jbGchdTL7d0
2024-06-10T10:43:12.812925
https://www.youtube.com/watch?v=cN6S0Ehm7_8
Hi, this is Lance from Langchain. This is the 14th part of our RAG from Scratch series. I'm going to be talking about an approach called ColBERT. So we've talked about a few different approaches for indexing, and just as a refresher, indexing falls right down here in our flow. We started initially with query translation: taking a question and translating it in some way to optimize retrieval. We talked about routing it to a particular database. We then talked about query construction, so going from natural language to the DSL, or domain-specific language, for any of the databases you want to work with: metadata filters for vector stores, Cypher for a graph DB, or SQL for a relational DB. So that's the flow we've talked about. Today we covered some indexing approaches like multi-representation indexing, we gave a small shout-out to Greg Kamradt and his video on chunking, and we talked about hierarchical indexing. I want to include one advanced embedding approach. We've talked a lot about embeddings, which are obviously very central to semantic similarity search and retrieval. One of the interesting points that's been brought up is that embedding models, of course, take a document, which you can see here on the top, and embed it, basically compressing it to a vector. So it's kind of a compression process: you're representing all the semantics of that document in a single vector, you're doing the same to your question, and you're doing a similarity search between the question embedding and the document embedding in order to perform retrieval. You're typically taking the k most similar document embeddings given a question, and that's really how you do it. Now, a lot of people have said, well, hey, compressing a full document, with all its nuance, into a single vector seems a little bit overly restrictive, right? And this is a fair question to ask. There have been some interesting approaches to try to address that, and one is this method called ColBERT. The intuition is actually pretty straightforward. There are a bunch of good articles I linked down here; this is my little cartoon to explain it, which I think is hopefully kind of helpful. But here's the main idea: instead of just taking a document and compressing it down to a single vector, basically a single embedding vector, we take the document and break it up into tokens. Tokens are just units of content; what they are depends on the tokenizer you use, and we talked about this earlier. So you basically tokenize it, and you produce an embedding, or a vector, for every token, and there's some kind of positional weighting that occurs when you do this process. Obviously, look at the implementation to understand the details, but the intuition is that you're producing some kind of representation for every token. Okay? And you're doing the same thing for your question: you're taking your question, breaking it into tokens, and you have some representation or vector per token. And then what you're doing is, for every token in the question, you're computing the similarity across all the tokens in the document, and you're finding the max; you take the max, you store it, and you do that process for all the tokens in the question.
So again, for token two, you compare it to every token in the document and compute the max, and then the final score is, in this case, the sum of the max similarities between every question token and any document token. So it's an interesting approach. It reports very strong performance; latency is definitely a question, so production readiness is something you should look into. But it's an approach that's worth mentioning here because it's pretty interesting. And let's walk through the code. So there's actually a really nice library called RAGatouille, which makes it very easy to play with ColBERT. You just pip install it here; I've already done that. And we can use one of their pre-trained models to mediate this process. I'm basically following their documentation; this is what they recommend. So I'm running this now; hopefully this runs somewhat quickly. I previously loaded this model, so hopefully it won't take too long, and yeah, you can see it's pretty quick. I'm on a Mac M2 with 32 gigs, just as context in terms of my system. This is from their documentation: we're just grabbing a Wikipedia page; this is getting a full document on Miyazaki. So that's cool, we're going to grab that. Now, this is just from their docs, and this is basically how we create an index: we provide some index name, the collection, and the max document length. And you should look at their documentation for these flags; these are just the defaults. So I'm going to create my index, and I get some logging here, so it's working under the hood. And by the way, I actually have their documentation open, so you can follow along; let's see, yeah, right about here. So you can follow this indexing process: to create an index, you load a trained model, which can be either your own pre-trained model or one of theirs from the hub, and this is the process we're doing right now. Create index is just a few lines of code, and this is exactly what we're doing: this is my documents, and this is the indexing step that we just walked through. And it looks like it's done; you get a bunch of logging here, that's fine. Now let's actually see if this works. We're going to run a RAG search: what animation studio did Miyazaki found? We set our k parameter, and we get some results. Okay, so it's running, and cool, we get some documents out, so it seems to work. Now, what's nice is you can run this within LangChain as a LangChain retriever. So that basically wraps this as a LangChain retriever, and then you can use it freely as a retriever within LangChain. It works with all the different LLMs and all the other components, like re-rankers and so forth, that we talked through. So you can use this directly as a retriever. Let's try this out. And boom, nice and fast, and we get our documents. Again, this is a super simple test example; you should run this on more complex cases. But it's a pretty easy spin-up.
It's a really interesting alternative indexing approach, using, again, like we talked through, a very different algorithm for computing document similarity that may work better. I think an interesting regime to consider this for would be longer documents: if you basically want long-context embedding, I think you should look into, for example, the max token limits for this approach, because it partitions the document into embeddings for each token. I would be curious to dig into what the overall context limits are for this ColBERT approach, but it's really interesting to consider, and it reports very strong performance. So again, I encourage you to play with it, and this is just kind of an intro to how to get set up and start experimenting with it really quickly. Thanks.
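Below is a small sketch of the MaxSim late-interaction score described in this walkthrough, written over hypothetical per-token embeddings in plain numpy rather than the actual ColBERT model; the embedding dimensions and token counts are illustrative.

import numpy as np

def maxsim_score(query_token_embs: np.ndarray, doc_token_embs: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token, take its max
    similarity against all document tokens, then sum over query tokens."""
    # Normalize so the dot product below is cosine similarity.
    q = query_token_embs / np.linalg.norm(query_token_embs, axis=1, keepdims=True)
    d = doc_token_embs / np.linalg.norm(doc_token_embs, axis=1, keepdims=True)
    sim = q @ d.T                        # shape: (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())  # max over doc tokens, summed over query tokens

# Toy example: 4 query tokens and 12 document tokens with 128-dim embeddings.
rng = np.random.default_rng(0)
query_embs = rng.normal(size=(4, 128))
doc_embs = rng.normal(size=(12, 128))
print(maxsim_score(query_embs, doc_embs))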
RAG From Scratch: Part 14 (ColBERT)
432
LangChain
20240330
Our RAG From Scratch video series walks through impt RAG concepts in short / focused videos w/ code. This is the 14th video in our series and focuses on indexing with ColBERT for fine-grained similarity search. Problem: Embedding models compress text into fixed-length (vector) representations that capture the semantic content of the document. This compression is very useful for efficient search / retrieval, but puts a heavy burden on that single vector representation to capture all the semantic nuance / detail of the doc. In some cases, irrelevant / redundant content can dilute the semantic usefulness of the embedding. Idea: ColBERT (@lateinteraction & @matei_zaharia) is a neat approach to address this with higher granularity embeddings: (1) produce a contextually influenced embedding for each token in the document and query. (2) score similarity between each query token and all document tokens. (3) take the max. (4) do this for all query tokens. (5) take the sum of the max scores (in step 3) for all query tokens to get the similarity score. This results in a much more granular token-wise similarity assessment between document and query, and has shown strong performance. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_12_to_14.ipynb References: 1/ Paper: https://arxiv.org/abs/2004.12832 2/ Nice review from @DataStax: https://hackernoon.com/how-colbert-helps-developers-overcome-the-limits-of-rag 3/ Nice post from @simonw: https://til.simonwillison.net/llms/colbert-ragatouille 4/ColBERT repo: https://github.com/stanford-futuredata/ColBERT 5/ RAGatouille to support RAG w/ ColBERT: https://github.com/bclavie/RAGatouille
2024-06-10T10:44:28.022331
https://www.youtube.com/watch?v=Vw52xyyFsB8
Hi, this is Lance from Langchain. This is the fourth short video in our RAG from Scratch series, and it's going to be focused on generation. In the past few videos we walked through the general flow for basic RAG: starting with indexing, followed by retrieval, then generation of an answer based upon the documents we retrieved that are relevant to our question. That's the very basic flow. Now, an important consideration in generation is that what's really happening is we're taking the documents we retrieve and stuffing them into the LLM context window. So if we walk back through the process: we take documents and split them, for convenience of embedding; we then embed each split and store it in a vector store as this easily searchable numerical representation, or vector; and we take a question and embed it to produce a similar kind of numerical representation. We can then search, for example using something like k-NN, in this high-dimensional space for documents that are similar to our question based on their proximity, or location, in this space; in this case you can see a toy 3D example. Now we've recovered splits relevant to our question, we pack those into the context window, and we produce our answer. Now, this introduces the notion of a prompt. The prompt is kind of a placeholder that has, in our case, keys. Those keys can be, say, context and question, so they're basically buckets: we're going to take those retrieved documents and slot them in, and we take our question and also slot it in. If you walk through this flow, you can see that we can build a dict from our retrieved documents and from our question, and then we can populate our prompt template with the values from the dict. That then becomes a prompt value, which can be passed to an LLM, like a chat model, resulting in chat messages, which we then parse into a string to get our answer. So that's the basic workflow we're going to see, and let's just walk through it in code very quickly to give you a hands-on intuition. So we have our notebook we walked through previously: install a few packages; I'm setting a few LangSmith environment variables, which we'll see are nice for observing and debugging our traces; previously we did this quick start, and we're going to skip over that. And what I will do is build our retriever. So again, I'm going to take documents and load them, then split them; we've done this previously, so I'll go through it quickly. Then we're going to embed them and store them in our index, so now we have this retriever object here. Now I'm going to jump down here, and here's where it's kind of fun: this is the generation bit. You can see here I'm defining something new: a prompt template, and my prompt template is something really simple. It's just going to say: answer the following question based on this context. It's going to have this context variable and a question. So now I'm building my prompt. Great, now I have this prompt. Let's define an LLM; I'll choose GPT-3.5. Now this introduces the notion of a chain.
So in LangChain we have an expression language called LCEL, the LangChain Expression Language, which lets you really easily compose things like prompts, LLMs, parsers, retrievers, and other components. The very simple example here is: let's take our prompt, which we defined right here, and connect it to an LLM, which we defined right here, into this chain. So there's our chain. Now all we're doing is invoking that chain. Every LangChain Expression Language chain has a few common methods like invoke, batch, and stream. In this case we just invoke it with a dict, so context and question, which maps to the expected keys here in our template. And if we run invoke, it's just going to execute that chain and we get our answer. Now, if we zoom over to LangSmith, we should see that it's been populated. So, yeah, we see a very simple runnable sequence. Here was our document, here's our output, and here is our prompt: answer the following question based on the context. Here's the document we passed in, here is the question, and then we get our answer. So that's pretty nice. Now, there are a lot of other options for RAG prompts. I'll pull one in from our prompt hub. This one's a fairly popular prompt; it just has a little bit more detail, but the main intuition is the same: you're passing in documents, you're asking the LLM to reason about the documents given a question, and produce an answer. And now here I'm going to define a RAG chain, which will automatically do the retrieval for us. All I have to do is specify: here's my retriever, which we defined before, and here's our question, which we invoke with. The question gets passed through to the key "question" in our dict, and it automatically triggers the retriever, which returns documents, which get passed into our context. So it's exactly what we did up here, except before we did this manually, and now this is all automated for us. We pass that dict, which is auto-populated, into our prompt, LLM, and output parser; now let's invoke it, and that should all just run. And great, we get an answer, and we can look at the trace and see everything that happened. We can see our retriever was run, these documents were retrieved, they got passed into our LLM, and we get our final answer. So this is kind of the end of our overview. I'll go back to the slides here quickly: we talked about indexing, retrieval, and now generation. In follow-up short videos, we'll dig into some of the more complex or detailed themes that address limitations that can arise in this very simple pipeline. Thanks.
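For reference, a condensed sketch of the generation chain walked through above. It assumes a `retriever` has already been built in the indexing step; package paths follow the post-0.1 LangChain layout and may differ slightly from the linked notebook:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Simple RAG prompt with two slots: the retrieved context and the user question.
prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Manual version: we fetch documents ourselves and slot them into the prompt dict.
chain = prompt | llm | StrOutputParser()
question = "What is task decomposition?"
docs = retriever.get_relevant_documents(question)  # `retriever` assumed from the indexing step
print(chain.invoke({"context": docs, "question": question}))

# Automated version: the retriever populates `context`, the raw input becomes `question`.
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(rag_chain.invoke(question))
```

The dict at the head of `rag_chain` is what auto-populates the prompt's keys, which is the "automated for us" step described in the transcript.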
RAG From Scratch: Part 4 (Generation)
385
LangChain
20240206
This is the fourth video in our series on RAG. The aim of this series is to build up an understanding of RAG from scratch, starting with the basics of indexing, retrieval, and generation. This video focuses on generation, covering the process of RAG prompt construction and passing the prompt to an LLM for answer generation. Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_1_to_4.ipynb Slides: https://docs.google.com/presentation/d/1eRJwzbdSv71e9Ou9yeqziZrz1UagwX8B1kL4TbL5_Gc/edit?usp=sharing
2024-06-10T10:47:07.191648
https://www.youtube.com/watch?v=SaDzIVkYqyY
Hi, this is Lance from Langchain. This is the fifth video focused on query translation in our rag-from-scratch series. We're going to be talking about a technique called hide. So again query translation sits kind of at the front of the overall rag flow and the objective is to take an input question and translate it in some way that improves retrieval. Now, HIDE is an interesting approach that takes advantage of a very simple idea. The basic rag flow takes a question and embeds it, takes a document and embeds it, and looks for similarity between an embedded document and embedded question. But questions and documents are very different text objects. So documents can be like very large chunks taken from dense publications or other sources. Whereas questions are short, kind of terse, potentially ill worded from users. And the intuition behind Hide is take questions and map them into document space using a hypothetical document or by generating a hypothetical document that's the basic intuition and the idea kind of shown here visually is that in principle for certain cases a hypothetical document is closer to a desired document you actually want to retrieve in this you know high dimensional embedding space than the sparse raw input question itself. So again, it's just kind of means of translating raw questions into these hypothetical documents that are then better suited for retrieval. So let's actually do a code walkthrough to see how this works. And it's actually pretty easy to implement, which is really nice. So first, we're just starting with a prompt, and we're using the same notebook that we used for prior videos. We have a blog post on agents already indexed. So what we're going to do is define a prompt to generate a hypothetical document. In this case, we'll say write a paper passage to answer a given question. So let's just run this and see what happens. Again, we're taking our prompt piping it to to open ai check gpd and then using string output parser and so here's a hypothetical document section related to our question okay and this is derived of course mel i'm just kind of embedded uh kind of world knowledge which is you know a sane place to generate hypothetical documents now let's now take that hypothetical document and basically we're going to pipe that into a retriever so this means we're going to fetch documents from our index related to this hypothetical document that's been embedded and you can see we get a few question a few retrieved chunks that are related to this hypothetical document. That's all we've done. And then let's take the final step where we take those retrieved documents here, which we defined, and our question. And we're going to pipe that into this rag prompt. And then we're going to run our kind of rag chain right here, which you've seen before, and we get our answer. So that's really it. We can go to Langsmith and we can actually look at what happened. So here, for example, this was our final rag prompt. Answer the following question based on this context, and here is the retrieved documents that we passed in. So that part's kind of straightforward. We can also look at, okay, this is our retrieval. Okay, no, this is actually what we generated a hypothetical document here. Okay, so this is our hypothetical document. So we've run chat open AI, we generated this passage was our hypothetical document. And then we've run retrieval here. So this is basically showing hypothetical document generation followed by retrieval. 
So again, here was our passage we passed in. And then here's our retrieved documents from the retriever, which are related to the passage content. So again, in this particular index case, it's possible that the input question was sufficient to retrieve these documents. In fact, given prior examples, I know that some of these same documents are indeed retrieved just from the raw question, but in other contexts, that may not be the case. So folks have reported nice performance using HIDE for certain domains. And the really convenient thing is that you can take this document generation prompt, you can tune this arbitrarily for your domain of interest. So it's absolutely worth experimenting with. It's a neat approach that can overcome some of the challenges with retrieval. Thanks very much.
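A minimal sketch of the HyDE flow described above, assuming the same pre-built `retriever` from the earlier indexing videos; the prompt wording is an approximation of the one used in the video:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Step 1: generate a hypothetical document that answers the question.
hyde_prompt = ChatPromptTemplate.from_template(
    "Please write a passage of a scientific paper to answer the question.\n"
    "Question: {question}\nPassage:"
)
generate_doc = hyde_prompt | llm | StrOutputParser()

question = "What is task decomposition for LLM agents?"
hypothetical_doc = generate_doc.invoke({"question": question})

# Step 2: retrieve real chunks that are close to the hypothetical document,
# not to the raw question.
docs = retriever.invoke(hypothetical_doc)

# Step 3: answer the original question from the retrieved (real) documents.
rag_prompt = ChatPromptTemplate.from_template(
    "Answer the following question based on this context:\n\n{context}\n\nQuestion: {question}"
)
answer = (rag_prompt | llm | StrOutputParser()).invoke(
    {"context": docs, "question": question}
)
print(answer)
```

The document-generation prompt is the main tuning knob: it can be rewritten to match the style of the documents in your index.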
RAG from scratch: Part 9 (Query Translation -- HyDE)
286
LangChain
20240214
HyDE (Hypothetical Document Embeddings) is an approach to improve retrieval that generates hypothetical documents that could be used to answer the user input question. These documents, drawn from the LLMs knowledge, are embedded and used to retrieve documents from an index. The idea is that hypothetical documents may be better aligned with the indexes documents than the raw user question. Slides: https://docs.google.com/presentation/d/10MmB_QEiS4m00xdyu-92muY-8jC3CdaMpMXbXjzQXsM/edit?usp=sharing Code: https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb Reference: https://arxiv.org/pdf/2212.10496.pdf
2024-06-10T10:48:00.489173
https://www.youtube.com/watch?v=vygFgCNR7WA
Hi, this is Lance from Langchain. We've heard a lot of interest from users on evaluation in recent weeks and months, and so we want to kick off a short series laying out how to think about evaluation from scratch and how to implement it yourself using Langsma. So to kind of set the stage, when new models are released like Cloud3, you often see a bunch of public evaluations reported. So on the left here is the Cloud3 blog post showing a bunch of different evals in the various rows and compared to other popular LLMs in the various columns. You've also seen maybe things like Chatbot Arena, which now has Cloud3 Opus at the top. But the question here is like, what are these evaluations? How to think about them and how could I implement them myself? So maybe a nice way to think about evaluation is just four pieces. There's a data set, there's some kind of evaluator, there's a task, and there's some means of interpreting the results. So let's actually make this concrete and look at the various evaluations that have been run on some of these public models. So human eval is a really good one. It was produced by OpenAI back in, I think, 2021. It has 165 programming problems. So it's basically related to the task of Cogen. You can see that at the bottom. And what's interesting is the evaluation method here, you can think of it in two ways. What's like the judge? So who's actually judging the result? And like, what's the mode of evaluation? In this case, the mode is like ground truth. There's a ground truth correct answer for every coding problem, and you're using unit tests, some kind of programmatic way of specifying correctness. Interpretation typically just reported as bar charts. In this case, I'm showing some results from the recent Databricks model, which they report a human eval on. But let's look at another one. So here's an interesting kind of comparative example on chatbot arena so in this case there's actually no static data set this is more dynamically generated from user interactions and the way it works is a user's presented with two different llms they prompt them both and they choose which response they like better so it's more of like an arena or like a a battle format in that sense. So again, in this case, the judge is a human. The mode, in this case, is not really ground truth so much as a comparative assessment. In terms of metrics, they often report these pairwise plots, which basically show one model versus all other models. And then the statistics tell you the likelihood of one model beating the other. They roll these up into things like elo scores which kind of tell you the likelihood of a model beating another one um kind of taken from chess so anyway you can see that you can look at different evaluation like benchmarks using these four different kind of buckets and just group them that way and think through them in that way. But we've kind of seen an interest in personalized testing and evaluation. So for example, like, of course, models are, you know, published with, you know, hundreds of different public benchmarks, but people often want to build their own benchmarks and kind of hack on and test models themselves. We've actually seen some interest in the community around this. So Karpathy tweeted about one nice benchmark from a scientist at Google DeepMind. Will DePue from OpenAI mentioned there's kind of a lot of opportunity in better evaluations. 
So, you know, if you kind of talk about those four buckets and break them down a little bit, there's a few different things to kind of cover here. There's a lot of surface area for building your own evaluations. So when you think about data sets, there's a few different things to kind of cover here. There's a lot of surface area for building your own evaluations. So when you think about data sets, there's a few categories. Like one is manually curated, like we saw with human eval. Build a data set of question answer pairs or like code solution pairs, right? So there's like highly curated, you define it yourself. Another is like if you have an app out in production, you have a bunch of user interactions with your app you can roll those into a data set for example of user logs and you can use lms to synthetically generate data sets for you so these are all really interesting modes of data set kind of creation now in terms of evaluation we saw examples of human as a judge like in the case of chatbot arena in that case with comparison as the mode we talked about using like unit test or heuristics as the judge against like a ground truth correct code solution in the case of human eval you can also use llns as judges and there's a lot of cool work on this um llns as judges can you know judge for general criteria which you might think of as reference free like there's no ground truth but you give the lln like I want to assess a few different things like, you know, brevity or something. So it's kind of like a reference-free mode of evaluation. And of course, an LLM can also judge or an answer relative to ground truth. So the final piece here is thinking about like, how are these typically applied? You can think about a few different categories. Unit tests, evaluations, and A-B testing. So unit testing are kind of like simple assertions about functionality. These are very routine in software engineering. They can be run online to give an application feedback. It can be run offline as part of, for example, CI or other kinds of evaluation. You can also have, like, again, like we talked talked about before a proper evaluation with a judge in this case it's not just like um you know maybe a simple assertion in this case maybe it's a more involved like human feedback or ln judge and again we talked a little bit about human evaluation and also ln based evaluation and then ab testing this is just comparative evaluations popular one here's regression testing, looking over time, or experimental testing, assessing different parameters. So a question you might ask is like, well, how do I actually get started? How do I implement some of these ideas myself? And this is kind of where Langshan comes in. So the team at Langshan built Langsmith as a platform that makes it very easy to run evaluations. And so if you go to this link here, smith.langshan.com, you'll be prompted to sign up i've already signed up so you can see this is just my like linesmith page we're going to talk about this in a lot more detail in upcoming videos but the point is likes with makes it very easy to instrument various evaluations we have a ui and sdk for building data sets versioning them editing them an, and SDK for defining your own evaluators or implementing or using custom evaluators. And we also have the UI for inspections, for trace inspections, comparative analysis. 
And we're going to walk through these ideas very carefully in a few different videos so you understand all these pieces and how to do each one from scratch. And of course, very importantly, LangSmith does not require LangChain to use, but you can certainly use it with LangChain. That's an important point of flexibility I want to highlight. In the upcoming videos, we're going to dig into each one of these bins really carefully and build up an understanding from scratch of how to build your own evaluations. Thanks.
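Since the video mentions Chatbot Arena rolling pairwise battles up into Elo scores, here is a small sketch of the classic Elo expected-score and update formulas to make that idea concrete (the Arena's actual rating methodology is more involved than this simple online update):

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Update both ratings after one pairwise 'battle' (ties ignored for brevity)."""
    e_a = elo_expected(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Two models start at 1000; a user votes for model A in one head-to-head comparison.
print(elo_expected(1000, 1000))          # 0.5 before any battles
print(elo_update(1000, 1000, a_won=True))  # A's rating rises, B's falls by the same amount
```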
Why Evals Matter | LangSmith Evaluations - Part 1
404
LangChain
20240408
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video lays out 4 main considerations for evaluation: (1) dataset, (2) evaluator, (3) task, (4) how to apply evaluation to improve your product (e.g., unit tests, A/B tests, etc). Getting started documentation: https://docs.smith.langchain.com/evaluation
2024-06-10T22:01:10.235428
https://www.youtube.com/watch?v=N9hjO-Uy1Vo
Hey, this is Lance from LangChain. This is our third video focused on LangSmith evaluations. The first video laid out why evals matter and why they're interesting. The second video laid out the core LangSmith primitives that we're working with. So now let's actually jump into some code. Again, this is just the overview of the eval landscape that we've talked about previously: there are datasets, there are evaluators, there are tasks we care about, and there's how you apply your evals. So all I've done is, if you go to smith.langchain.com, you'll have an opportunity to sign up if you haven't already done so. I've already signed up, of course, so this is showing my workspace, which we're going to talk about later. I've done some pip installs: langsmith, openai, ollama. There's no LangChain install here; we're just going to work with LangSmith directly, we're not going to involve LangChain at all. Here I'm setting the API key that I got when I signed up, I'm also setting this other environment variable to enable tracing, and I'm going to define a new project called test. So this LANGCHAIN_PROJECT environment variable basically sets up a new project that I'm going to work in, and you'll see how that's interesting very shortly. So here's a first question you might ask: how do I build my own dataset? It's a very simple, reasonable question to ask. Now let's say we want to build a dataset of question-answer pairs for this new blog post on the Databricks model, DBRX. Really cool release, state-of-the-art open-source LLM, and a lot of nice detail in this blog post. Let's say I want to build a question-answer dataset based on this blog to test the system I have for answering questions in general; this is a very popular use case. So what we're doing here is I'm graying out everything we're not focusing on; we're only focusing on a manually curated dataset, that's it. So what I'm going to do: I've already gone through the post and I've curated a few questions and a few answers to those questions, and this is just a good old pandas DataFrame, that's it. Now what I'm doing here is, from langsmith I import the Client, and I'm going to define a new dataset, DBRX. This is the dataset I want to work with, and I'm just calling create_dataset, giving it a name, giving it a description, and passing in the inputs and outputs that I specified up here. That's it. So I run this, and that runs. Now here's where I can hop over to LangSmith. If I go over, you can see a few different categories: projects, annotation queues, deployments; we'll talk about all that later, don't worry about it for now. Go to Datasets & Testing, and you can see a whole set; I have a ton of datasets because I've been doing a lot of work, but let's search DBRX, the dataset we just created. So here it is. Okay, we can see created and modified. Now let's just click on one of these; we can actually see here's the input question, here's the answer, so that's kind of nice; we can look at our dataset here. This Tests tab tells us whether we've done any evaluations on it. We've not; it's just a set of examples. That's really it, so it's pretty nice. Now, let's say I want to update this, so I want to add a question. Again, just call create_examples with the dataset name or ID. There it is. I go back, it shows up. Easy. Now, I also want to ask, okay, what are the different dataset versions?
I can rewind: okay, that's what it was originally, and this is what it is currently after my update. Let's say I want to edit it. I can actually go to an example and edit it here. There we go, easy enough; cancel out. That's really it. So if we go back, what you can see is we defined a set of question-answer pairs and we used the LangSmith SDK to create a dataset from them directly. We showed how to edit the dataset, and we showed that it has versioning. That's kind of it. If we go back and I click on create new dataset: I've saved that eval CSV, so let's say I want to create a new one, a test dataset. I'll call this a key-value dataset; we talked about that previously. The inputs and outputs are just key-value pairs: question, answer. You can see it all automatically populates, boom, and there it is; I have it from a CSV as well. That's really it, it's super simple. I defined my inputs and my outputs, I used the client to create a dataset, I edited it, I showed how to look at the versions, and I showed how to create a dataset from a CSV using the UI. That's really it. Again, that's the foundational piece of building developer-curated or manually curated datasets, and we're going to be talking about how to build on this next.
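A minimal sketch of the dataset-creation flow described above, using the LangSmith SDK; the question/answer content shown is illustrative rather than the exact examples from the video:

```python
import os
from langsmith import Client

os.environ["LANGCHAIN_API_KEY"] = "..."  # your LangSmith API key

# Hand-curated question/answer pairs about the DBRX blog post (abbreviated here).
inputs = [
    {"question": "How many parameters does DBRX have?"},
    {"question": "What type of architecture does DBRX use?"},
]
outputs = [
    {"answer": "DBRX has 132B total parameters, with 36B active on any input."},
    {"answer": "A fine-grained mixture-of-experts (MoE) decoder-only transformer."},
]

client = Client()
dataset = client.create_dataset(
    dataset_name="DBRX",
    description="QA pairs about the DBRX blog post",
)
client.create_examples(inputs=inputs, outputs=outputs, dataset_id=dataset.id)

# Appending later works the same way, against the same dataset.
client.create_examples(
    inputs=[{"question": "What context window does DBRX support?"}],
    outputs=[{"answer": "A 32K token context window."}],
    dataset_id=dataset.id,
)
```

Edits and version history are then visible in the Datasets & Testing section of the LangSmith UI, as shown in the video.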
Manually Curated Datasets | LangSmith Evaluations - Part 3
293
LangChain
20240408
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video introduces how to create, edit, and version your own evaluation dataset using the LangSmith SDK. Documentation: https://docs.smith.langchain.com/evaluation/faq/datasets-client Notebook: https://github.com/langchain-ai/langsmith-cookbook/blob/main/introduction/langsmith_introduction.ipynb
2024-06-10T11:20:27.185848
https://www.youtube.com/watch?v=hPqhQJPIVI8
Hi this is Lance from LangSIM. This is the fourth video on LangSmith evaluations. So in video one we kind of laid out why evals are interesting and important. Video two we talked about the LangSmith primitives like the core foundational pieces to understand about LangSmith evaluation. And we just talked about dataset creation. We talked about an example of building a question answer pair dataset from this blog post on that recent Databricks LLM. So again, we showed how to build like a manually curated data set. Here's our inputs and our outputs. We use the SDK to build the data set and to update it. We also showed how to use UI. Now, a question I'd ask is, okay, that's great. But what if I have an app in production? question I'd ask is, okay, that's great, but what if I have an app in production? I have traces being, you know, logged over to Lanxmith. How to build a data set based on, for example, user logs? So, for example, let's say there's a bunch of generations or user inputs I thought were interesting. I want to build a data set from those so I can do future evaluations against it. So, let's do a kind of a toy example here. Let's create a new project. We'll call this DBRX. And what I'm going to do is I'm going to load that blog post. Here we go. So the same blog post we've been working with on the recent Databricks model. Now I'm going to define, here's my little toy app. So it's just answer a question. I'm using just OpenAI API. I'm not using Langchain at all here. Really simple. Answer the user question based on two or three sentences, given the context. That's really it. There we go. So let's say this is like an app. I have this in production somewhere. Users are just interacting with it. And I get a question. What are the main differences in training efficiency? And yeah, they have answers. Here's another one. Cool. and yeah they have answers here's another one cool so let's actually go over and we created a project for this move this over so go to my projects so you can see those two generations that i just ran here are going to be logged to this project dbrx so now we're in this project and we see those two generations you can look at the time it matches roughly the time that is right now pacific that's great so you know okay this is what we just saw it's actually kind of interesting to look at so here you can see this is the trace effectively so in this case the trace only has a single run so again we talked about there's runs and runs roll up into traces trace contains all the different runs in a given like application in this case it's a simple prompt response so there's only a single run and indeed the trace only has that single run in it so again you can kind of look over here here's your system prompt it plumbs in that whole blog post and you know the human prompt what here's the question and then the output so very simple now let's say these are examples of user interaction with your app you want to capture them for posterity or for evaluation or whatever just click on the two so we're in langsmith now um and you see this thing down here add data set so i click on that um i'm gonna move this over so'm going to call this a new dataset. Let's just call it like DBRX QA testing. Testing. There we go. Now it's automatically flagged as a chat dataset. 
Recall previously we built a key value data set based with basically question answer key value pairs for the inputs and outputs in this case this is derived from chat session so it's a bit a different data set type and this is useful for example if we want something like fine-tuning in the future the inputs are already in chat and serialized chat message format which is useful for for that which that's a more advanced use case we'll talk about later so you know that's really it we can just hit create and i'm going to go down to my data sets and let's just search dbrx boom and here it is you can see it's a chat data set it's derived from those inputs that my mock user provided um and you get the output so it's kind of basis set of input prompt output pairs from my app which i've now saved the data set which i could do work on accordingly and again just like before you can see you can edit these so for example here's a really good use case you have a bunch of user um you know interactions or inputs to your system but your system produced the wrong answers if you want to turn this into like an eval set you can actually edit these to be ground truth answer but you still have the real user inputs in there so it's just a very kind of common trick for building high quality evaluation sets um is to do this kind of logging and then potentially to clean the answers provided by the AI so set their then ground truth. So that's really it. So again, if I zoom all the way back out, what did we do? We just showed how to build data sets from user logs from this little toy app that I built that's answering questions about a blog post. So we're going to be kind of building on this in some future videos. But what we've really covered here is is ability to build data sets from logs and uh manually curate them from like question answer pairs or it can really doesn't have to be question answer pairs but now in this case i did um but you know developer curated versus in this particular case you know logs from a hypothetical user so two different ways of building data sets thanks
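A rough sketch of the kind of traced toy app described above, using the LangSmith `traceable` decorator and OpenAI wrapper so each call is logged to a project; `blog_post_text` is assumed to be the DBRX blog post loaded earlier, and the function name is illustrative:

```python
import os
from openai import OpenAI
from langsmith import traceable
from langsmith.wrappers import wrap_openai

os.environ["LANGCHAIN_TRACING_V2"] = "true"  # send traces to LangSmith
os.environ["LANGCHAIN_PROJECT"] = "DBRX"     # log runs under this project
os.environ["LANGCHAIN_API_KEY"] = "..."

openai_client = wrap_openai(OpenAI())  # wrapped client so the LLM call is traced as a child run

@traceable  # the whole function becomes a trace in the DBRX project
def answer_dbrx_question_oai(inputs: dict) -> dict:
    system = (
        "Answer the user question in 2-3 sentences using only this context:\n\n"
        + blog_post_text  # assumed: the blog post text loaded earlier
    )
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": inputs["question"]},
        ],
    )
    return {"answer": response.choices[0].message.content}

# Each call shows up as a trace; selected traces can then be added to a dataset
# ("Add to Dataset") from the project view, as shown in the video.
answer_dbrx_question_oai({"question": "What are the main differences in training efficiency?"})
```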
Datasets From Traces | LangSmith Evaluations - Part 4
329
LangChain
20240408
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video introduces how to create datasets directly from logs (e.g., user interactions with your app that are captured in LangSmith). Documentation: https://docs.smith.langchain.com/evaluation/faq/datasets-webapp Notebook: https://github.com/langchain-ai/langsmith-cookbook/blob/main/introduction/langsmith_introduction.ipynb
2024-06-10T11:21:33.103096
https://www.youtube.com/watch?v=y5GvqOi4bJQ
hi this is Lance from LagChain this is our fifth video on lagsmith evaluations so our first video kind of laid out why evals are important and interesting our second video laid out kind of the core langs with primitives that we'll be working with we just talked through two two important concepts so building a data set from like a set of manually curated in our case question answer pairs um we built a data set based on this blog post about the new databricks model and i basically manually built a number of question answer pairs from that blog post i add them to my own data set that data set then was called dbrx and i used the sdk to create it so that's really it i also showed how to build a data set for user logs which is really useful for you if you want to take a bunch of actual user data like user questions um and convert them into like a data set with ground truth responses for future testing so that's another you know really useful and common technique for data set building so now let's get into uh evaluation so here's a question i build my data set how to actually evaluate my lm against it so in the second video we talked about this information flow but i'll just reiterate it briefly so we have a data set the data set has examples in my case the data set has input output pairs question answer what i'm going to do is i may have an app and we'll see that shortly i'd like a little example app um that app sees an input from my data set produces an output i also have the ground truth output in the data set i pass the user or the the ground truth output and the app output to an evaluator, and it'll perform some evaluation and return a score. That's it. Now, here's where it gets a bit interesting. The world of evaluators is actually pretty broad, and we've actually touched on this in a few other videos. So there's custom evaluators, there's built-in evaluators. Within built-in evaluators, there's evaluators for labels or non-labels. In my particular case particular case i have labels and i want to use a built-in language with evaluator so we have a bunch of them listed here and i also go over and show you so off the shelf line blank chain evaluators is is like a really uh nice way to go um so you don't have to kind of re-implement something from scratch for question answering again my data set is question answer pairs. So on an evaluator that operates on question answer pairs, right? Here's a few different ones that are popular QA, context QA, COT QA. The high level point is this, COT QA is often a very nice evaluator because it'll use chain of thought reasoning um to basically look at the llm generate answer versus the ground truth and to evaluate whether or not uh they they match um and so typically in terms for the greater llm you use like a very powerful model like maybe quad opus or you might use you know open aiT-4, for example. But that's the high-level idea. You're using chain of thought reasoning to determine the final verdict. So let's actually just walk through what's going on here. I'm going to pick that COTQA as my evaluator. Remember I built my data set, DBRX? Let's actually go over and have a quick look at that. 
So if I go over to my lagsmith um i'm going to go data system testing dbrx search for it here it is i have my data set i've done no evaluations against it it has four examples so this is kind of where i am currently now that's my data set name remember i built this function answer data bridge questions so i define that up here just using open ai very simple um i'm plumbing in my data set name i'm plumbing my evaluator i'm adding a little prefix like this is test qa open ai and i'm also adding some metadata metadata like i'm you know website context into you know gp35 turbo so anyway that's all that's going on here and this evaluate is all i need so i kick this off and this will kick off an evaluation so again think about what's happening look at the flow here all that's happening is i have those four questions each of those questions is going in going to my my basic llm which is this answer chain right so that's this thing so each question is getting plumbed into this here Here's a good example, right? Right here. We plumb in a question, we get an answer out. It's just doing that behind the scenes. So we take those four questions, we plumb them in, we get answers out. For every one of those answers out, I go to that data set, I fetch the ground truth answer. Again, we can see them here, right? Look, here's one of our inputs. So this input gets plumbed into our LLM. And we have this ground truth output. That's it. So let's go back. Hopefully that ran. It ran. Now here's where I do. I go to my data set. I look at tests. Now I'm going to move my little head here. So now I have a test. You see this prefix that we added? It is now, right this this thing right here we can see you know our name has that prefix in it we see some metrics here latency p99 p50 p99 um and we can see things like error rate we can see our metric and so forth so let's actually just dive in here so this is where you can really have a lot of fun and do a lot of kind of inspection of your results. So here's what's going on. The input here is the question that I plumbed in, right? Go back to our flow. The input is this thing. It's just my question. All question all right the reference output go back to the flow is this thing it just makes the correct answer okay so i have the input question i have the reference output now here's what my model returned so this is what we're assessing we're assessing this reference versus what i what i returned using that cotTQA evaluator. So behind the scenes, let's actually dig into that. So there's two things I can click on here. This open run thing opens up just my chain. Okay. So this is my chain, again, which we defined up here. So it's this answer question with open AI thing. 
So that's just this running on our input: there's all the context, here's the question that got plumbed in, and here's the answer. So if I go back, that's what that run is; that's all that's happening there. Now I might want to know, well, how does the grader work, what actually happened there? So if I click on this little arrow, it'll take me to the evaluator run, and that's going to open up right here. This is the evaluator that we used off the shelf; we didn't have to write it or anything. We can see we're using OpenAI as the evaluator, which is fine, and here's actually the prompt, which is very useful: you are a teacher grading a quiz, and so on; it gives you a bunch of criteria. So basically what's happening is this is the grader prompt, and you're seeing the question, the context, and the student answer. The context gives you the ground truth answer, the student answer is what the model returned, and then here's the output: here is the reasoning and here's the score. This is really nice; you can audit the grader really easily. So if I go back, let's zoom all the way back out: what's going on here? I defined the dataset. My inputs are here. My reference outputs are here. My LLM generations are here. My scores are all listed, one or zero in this case. And I can dig into each one and understand what the evaluator did. I can also dig into my generations using this open run button to see how they work. So if I zoom all the way back out the stack, what are we doing here? We're doing evaluation against our dataset with a built-in LangSmith evaluator. This was the information flow. And if I go all the way down, what did we just do? We had our dataset of DBRX examples, four questions. We used LLM-as-a-judge with a built-in LangChain evaluator against the ground truth answers that we provide in our dataset, and we basically did an LLM evaluation. That's it. We're going to be building on this in the next video. Thanks.
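A condensed sketch of running the built-in evaluator described above with the LangSmith `evaluate` helper (the prebuilt `LangChainStringEvaluator` wrapper requires langchain to be installed; `answer_dbrx_question_oai` refers to the toy app sketched earlier):

```python
from langsmith.evaluation import evaluate, LangChainStringEvaluator

# Built-in chain-of-thought QA grader: an LLM judge compares the app's answer
# to the reference answer stored in the dataset.
cot_qa_evaluator = LangChainStringEvaluator("cot_qa")

experiment_results = evaluate(
    answer_dbrx_question_oai,         # the app under test (sketched earlier)
    data="DBRX",                      # dataset name in LangSmith
    evaluators=[cot_qa_evaluator],
    experiment_prefix="test-qa-oai",  # shows up in the Tests tab of the dataset
    metadata={"variant": "DBRX blog context, gpt-3.5-turbo"},
)
```

Each example's input is passed to the target function, the output is graded against the reference answer, and the scored runs appear under the dataset's Tests tab.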
Pre-Built Evaluators | LangSmith Evaluations - Part 5
517
LangChain
20240408
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video introduces how to use LangSmith's many pre-built evaluators for tasks such as RAG (question answering), evaluating generations based upon supplied criteria, etc. Documentation: https://docs.smith.langchain.com/evaluation/faq/evaluator-implementations Code: https://github.com/langchain-ai/langsmith-cookbook/blob/main/introduction/langsmith_introduction.ipynb
2024-06-10T11:22:54.183501
https://www.youtube.com/watch?v=w31v_kFvcNw
Hi, this is Lance from LangChain. This is the sixth video in our series focused on LangSmith evaluations. In the first video, we laid out why evaluations are important. In the second video, we laid out the core LangSmith primitives. We then talked about how to build datasets. First, developer curated: we built one for the Databricks model, which is discussed in this blog post, so we built a question-answer dataset, and we showed how to edit it and how to append examples to it. We then also showed how to build datasets from user logs: if you have an app in production and you're getting questions from users, you can basically plumb those into a dataset. We then talked through how to use an evaluator, in our particular case a built-in LangSmith evaluator, to judge our app, which we built here. So we had a very simple app that answers questions about a given context. In this case we plumbed in the blog post and we're using GPT-3.5 to answer questions. We're comparing those GPT-3.5 generations, or answers, to our dataset, which was manually curated. That's really it. We used a built-in LangSmith evaluator called COT QA, which is very convenient for question answering. So that's kind of where we are; that's what we did. Now let's build on this a little bit, and this gives you a summary of what we just did. Now let's say I want to do something different: I want to define my own custom evaluator. Let's say I want a simple assertion about whether an answer was actually generated or not. Like, was there some kind of failure in generating the answer? Did it return an empty string? There can be various reasons why an LLM fails to produce a generation, right? So in this case, I'm changing it up a little bit. It's still my dataset of question-answer pairs derived from the Databricks blog post, but in this case I want my grader to be a heuristic, a hard-coded decision: is there an answer or not? It's reference-free; there isn't a ground truth there, it's just: is there an answer or not? So it's more like criteria-based grading. This is where it's a little bit squishy; you can think of this more as a unit test. It's a very simple assertion about functionality, as opposed to an evaluation that uses an LLM as a judge or a human as a judge; this is just a quick test: is there an answer present or not? And of course this can be used in offline evaluation, potentially something like CI. So it's a nice test or sanity check that my system is actually working. So how would I do that? It's actually really simple; this little function is actually all you need. But let me explain a bit about what's going on here. Remember our flow diagram; let's go back up. There's this run and there's an example. The run is my app, basically; my app has some output, so the run has this .outputs, which is basically the dict or object that comes out of my app. And likewise my example, which I pass into the evaluator, also has this .outputs, which is basically everything that's available to the example. In my particular case, what I'm going to care about, as in typical evaluations, is often just the answer.
But the point is, when I'm doing evaluation, there's this example object and there's this run object. And these two things are being compared in some way so that's really it so remember i have an example and a run now if i build my custom evaluator i plumb in that example and run just that we just talked about and i can just easily fish out from the run which is my llm app i want to get the answer now let's actually go all the way back up and make sure we understand what we just said. So I'm going to go back to my app. Here's my little app, answer database question open AI. It returns a dict with this answer key. That's where we get that answer thing from. So that key depends on my app. In this case, my app returns, you can see it's a dict with answer in it. That's it. So that's why in my custom evaluator, when I go down here, I just go to my outputs from the run. Again, it runs my app. The outputs contains everything output by the app. In that case, that output is a dict, but it has a key answer. So I'm phishing it out like that. And I do something really easy. If there's nothing there, then it's not answered. Otherwise, it is. As simple as it gets. Remember, I have this data set DBRX. I'm going to run on this. So again, here's my app. Basically, we already talked about this. Here's the data set. We already showed how to build that. This is my new little evaluator is answered. I'm going to create a new prefix for it. Add some new metadata. Kick that off. that should run pretty quick and I can go over to my data set it's already done so okay this is kind of interesting so now you see two evaluations the first one we showed previously using the built-in light chain evaluator now I had this new one you can see it has this new prefix and let's open it up and see so in this case those scores are all one because there is an answer every time. That's it. That's a custom evaluator. It shows up here in Langsmith. This scoring is arbitrary. We can choose different means of scoring. But in this particular case, I just chose one or zero. Shows up here. That's it. Pretty simple. And this is extremely flexible. I mean, this is just a function. It can be anything you want. Again, all you need is the ability to fish out your output. You could do string matching. You could do arbitrary different kinds of valuations. And you can also include, for example, LLMs in this to do arbitrary reasoning over your answers and produce scores. I've used this a whole lot, it's extremely convenient, very flexible and building custom values is extremely simple, particularly once you understand that kind of, that information flow that we showed here, all that's happening is I have an example, I have a run, each of those has this dot outputs thing, which gives you access to, in the case of my run, that output object, in my case it was just a dict with answer, and this dot outputs actually gives you access to everything in this example, that's really it. That's all you need to know. And that's really all I wanted to say about the custom evaluars. It's very powerful and very general. Thanks.
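A minimal sketch of the custom evaluator pattern described above: a plain function that receives the run and the example and returns a score dict; the app name is the same illustrative one used in the earlier sketches:

```python
from langsmith.evaluation import evaluate
from langsmith.schemas import Example, Run

def is_answered(run: Run, example: Example) -> dict:
    """Reference-free check: did the app return a non-empty answer at all?"""
    answer = (run.outputs or {}).get("answer")  # run.outputs is whatever dict the app returned
    score = 1 if answer and answer.strip() else 0
    return {"key": "is_answered", "score": score}

evaluate(
    answer_dbrx_question_oai,   # same toy app as before
    data="DBRX",
    evaluators=[is_answered],   # a custom function is used just like a built-in evaluator
    experiment_prefix="test-unit-test-is-answered",
)
```

Because the evaluator is just a function, it can do string matching, call out to another LLM, or compute any score you like, as long as it returns a key and a score.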
Custom Evaluators | LangSmith Evaluations - Part 6
376
LangChain
20240408
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video introduces how to define your own custom evaluation logic in LangSmith. Documentation: https://docs.smith.langchain.com/evaluation/faq/custom-evaluators Code: https://github.com/langchain-ai/langsmith-cookbook/blob/main/introduction/langsmith_introduction.ipynb
2024-06-10T22:06:02.974161
https://www.youtube.com/watch?v=kl5U_efgK_8
Hi, this is Lance from LangChain. This is the seventh video in our LangSmith evaluation series. Our first video gave some context as to why evals are interesting and important. The second video talked about LangSmith primitives. Our third video showed how to create manually curated datasets; we built one based upon this Databricks blog post. The fourth one showed how to build datasets from user logs, so if you have an app in production and you want to capture user questions and create a dataset from them, you can very easily do that; we talked through that. We then talked about various judges for datasets, so different types of evaluators. We showed how to use a built-in LangChain evaluator for question answering, and we applied that to our DBRX dataset. And we just talked through custom evaluators. So again, we've shown this flow diagram; go to those videos if you want a deep dive into those topics. Now we're going to have a little bit of fun. This is where you get into very real-world use cases and needs: you often want to do comparisons. So let's ask a really practical question: how does Mistral 7B running locally on my laptop compare to GPT-3.5 Turbo on this little challenge we've set up? Again, remember, we have a four-question eval set on this Databricks blog post. How does an open LLM do versus GPT-3.5 Turbo? Just a little note here: I'm using Ollama for this. You can download it by going to ollama.com, you can do an "ollama pull mistral" to get the model, and you can follow the instructions here. So here's my setup. I'm going to create a new function that does the same thing we were already doing with OpenAI, but here I'm using Mistral, running locally on my laptop. So again, I can ask and answer questions about the particular blog post. I just asked a question and here we go, the answer is streaming out. Very good, and it's obviously slower than OpenAI, which is exactly what we would expect, but what we really care about here is quality: how does it actually compare on this little challenge I built for myself? So what are we actually doing here? We have a developer-curated dataset of four examples on this Databricks blog post. I'm using LLM as a judge; again, remember the built-in COT QA evaluator I'm using. And I have ground truth answers for every question. I'm doing an A/B test between GPT-3.5 Turbo and Mistral running locally on my laptop. So that's the setup, and it's pretty easy.
So remember, we've already built or defined this data set dbrx we've already used this devaluator cotqa so that doesn't change at all all that changes is i'm just passing in this new function now now let's go back and look at that function it looks exactly like our old one a few little twists i'm using olam instead of openai that's really it same output object same output object answer you know addict with an answer that's it simple we just saw it work here so what i can do is kick off this eval this will take a little bit because it's running locally i have an m2 mac uh with 32 gigs by the way so that kind of gives you some sense i've heard a lot of people having good results using this draw 7b but on far smaller machines though so it's a really nice open model to work with if you want and you can see it's still churning it's streaming its answers out here it's actually done but didn't take that long it ran against my four questions now here's where it can get really interesting let's go over to my data set now i can see here that there's three runs so this is our initial run uh or experiment you can think of, with OpenAI. This is that second one to do with a custom evaluator. We're not interested in that. That was just kind of a quick more unit test that wasn't a proper kind of a LLM-based evaluation. And now here's our latest one. So here's where it gets really interesting. I can click on this and this. So my mistral run my open i run and i can click this compare thing that opens up this compare mode you can already see some nice statistics here so what i can see is like average score so the first run which was open ai indeed does quite a bit better in terms of score latency as exactly we would expect mistraw is slower by quite a bit um and here's the latency distribution and so forth so you get some immediate statistics about the two runs now here's where i've done a lot of work in the past and you know this is kind of the crux of ab testing that's extremely useful uh i mean that's why it's very helpful to do this inside lengths but this is all kind of the crux of A-B testing that's extremely useful. I mean, that's why it's very helpful to do this inside Lanks, but if this is all kind of captured for you, managing this all yourself can be quite painful. Here's my first question. Here's the reference output. Here is the output returned by Mistrawl. Here's the output returned by OpenAI. So I can actually look at those in detail. I can kind of zoom in, look at the answers, and like, hey, they look very similar here that's really cool and you can see my grader also assesses them both as correct and again we talked about you can actually click on the evaluator run for each case to see why but they look good now here's where it gets a little interesting it looks like my mistraw running locally is is did not receive a correct grade um and opening i did so let's actually look at what was the question what is the context window of the drbx a drop model okay so it's 32k token right what did mistraw think uh context window is 2048 so that is definitely wrong and we would have to investigate as to why Mistral believed that to be the case. It could be, you know, there could be many different reasons why I failed for that one. But indeed, our grader is correctly grading that as a wrong response. For fun, we can actually dig in and look at that particular grading trace. 
And we can see why the student's answer is incorrect: the student states that the context window is 2048, while the context clearly says 32k. There you go — the grader is doing the right thing, and we can go through each one like this. So this is a toy example, but it shows a very important and useful concept: comparative A/B testing. You might want to compare different prompts or different LLMs, and this is a very nice way to do that. You can see it's extremely simple: we've just supplied our dataset name — we're of course running against the same dataset — and typically I like to apply a different experiment prefix to enumerate the different experiments I'm running (you can also capture that in metadata, by the way, which is another way to differentiate your experiments). I'm using the same grader, of course, and I'm just modifying my function, which in this case means swapping in Mistral for OpenAI.

So this shows you how to use compare mode in LangSmith to do A/B testing really nicely. In this particular case we're comparing Mistral versus OpenAI. We can look at overall run statistics as well as granular, answer-wise differences. We can inspect the grader, as shown here, and we can look at the runs. This gives you a very flexible, general place to do A/B testing across different parameters — in this case I used different LLMs — and I use this all the time for lots of different projects. It is indeed quite useful, and it's very nice to have it all managed in one place for you.

We're going to be diving into some deeper themes after this; this is the final video of our introductory concepts. So if we zoom all the way back out, what did we talk through? We just walked through manually building a curated dataset and running LLM-as-a-judge against ground truth for A/B testing. We also talked through the same setup as a simple unit test using custom evaluators, different ways of creating datasets, and LLM-as-a-judge with ground truth without A/B tests — just looking at a single model and evaluating it using LLM-as-a-judge. We talked about the information flow for evaluation, and about different ways to build datasets from user logs and from manual curation. And that's really it. This gives you the groundwork you need to do a lot of different things — custom evaluators and A/B testing frankly cover a huge surface area of use cases — and we're going to do some deep dives following this. So stay tuned for additional videos. Thanks.
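As a rough sketch, the A/B comparison described in this video might be kicked off like this — the dataset name, experiment prefixes, and the OpenAI function name are assumptions, and the Mistral function is the one sketched above:

from langsmith.evaluation import evaluate, LangChainStringEvaluator

cot_qa_evaluator = LangChainStringEvaluator("cot_qa")  # built-in LLM-as-judge QA grader

# Experiment 1: the OpenAI-backed function defined earlier in the series (assumed name).
evaluate(
    answer_dbrx_question_openai,
    data="DBRX",  # assumed dataset name
    evaluators=[cot_qa_evaluator],
    experiment_prefix="gpt-3.5-turbo",
)

# Experiment 2: the local Mistral function sketched above.
evaluate(
    answer_dbrx_question_mistral,
    data="DBRX",
    evaluators=[cot_qa_evaluator],
    experiment_prefix="mistral-7b-ollama",
)

Each call logs a separate experiment against the same dataset, which is what enables the compare mode shown in the video.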
Eval Comparisons | LangSmith Evaluations - Part 7
531
LangChain
20240408
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This introduces how to use the LangSmith UI to compare (e.g., different prompts, LLMs, etc) across a dataset. Documentation: https://docs.smith.langchain.com/user_guide#comparison-view Code: https://github.com/langchain-ai/langsmith-cookbook/blob/main/introduction/langsmith_introduction.ipynb
2024-06-10T11:25:28.747704
https://www.youtube.com/watch?v=Kr10khtlSzs
Hi, this is Lance from LangChain. This is the ninth video in our LangSmith evaluation series. The prior video talked about setting up — I'll show it over here — a set of test cases in a dataset. Here's the dataset we set up, called relevance grade, and it has three test cases. Each test case basically has as input a text and an example question, and as output a ground-truth assessment of whether the question is relevant to the text. This is a RAG challenge I work on quite a bit, where you want to do document grading inline with your RAG flow. What I showed previously is testing various LLMs on this challenge directly from the prompt hub. Over here we have the prompt playground; this is my grader prompt. Again, it takes the inputs we just saw in our dataset, and I can easily pull up the dataset here in the prompt hub, run various LLMs against it, and those runs are all logged to our dataset so we can go inspect the results. That's the workflow we talked through.

Now, we've previously talked a lot about different kinds of evaluators we can use to automate evaluations, like LLM-based graders. It can be really useful to run these experiments but have an LLM-based grader automatically perform the grading. Because remember: in the prompt hub up here, we kicked off an experiment against the dataset we selected, but then we had to go and manually review those results ourselves. What if, for every experiment I run here, some LLM grader automatically runs on the results so I can just inspect its scores? That's possible. If I go over to my dataset — here's the dataset, here are my examples — and look up here at the evaluators tab, you can set these. You see this Add Evaluator button: this is where I can define my evaluator. I can set an evaluator model, and the key point is that I can customize the evaluation prompt, which can be whatever is relevant for the task I care about. When you set this evaluator, it will always run whenever an experiment is initiated against this dataset.

I've actually already done this: I have this JSON evaluator, and we can have a look — you can edit it, which is really nice. It's using OpenAI, and I'm using GPT-3.5 because it's nice and fast. This is what I'm going to use to grade the responses we generated in the prompt playground by testing different LLMs. For each LLM we kick off an experiment, and this grader will then assess it — you can see the results are all logged here, and of course logged to our dataset — so we can grade them with an LLM-based grader automatically, which is quite nice. And here's the prompt: you're a grader, you're going to be shown a student submission for a JSON string and a reference value that should be present in that string, and you assign a score. So you're laying out the logic of what you want the grader to assess. Really, all I want it to check is: is it valid JSON? Is there no preamble? Does it contain the right value for the score? That's all that's going on. So I can set this evaluator, and it's now attached to my dataset.
Now, when I run those experiments in the hub and go to the experiments, you'll see this correctness column. It's populated automatically for you every time you run an experiment from the hub. If I zoom in, this correctness captures what the automatic evaluator assessed about the output, so I don't have to manually review every result per se. In this case, the LLM-based grader that I defined and pinned to my dataset automatically runs on each example. And if you recall — we've shown this previously — you can actually click on these and open up the grader traces to see exactly what the grader saw. So this is the prompt the grader saw: you're a grader, as we saw before. Now here's what's interesting: it inputs the reference from the dataset and the submission from our experiment. The submission is what the LLM produced; the reference is the correct value. And again, this just shows the logic — ensure it's proper JSON, and so forth. Then we can look at the scoring: in this case correctness means it indeed passes the test. If you sanity check it, it looks right — these are all JSON strings, there's no preamble, and the values here match the ground truth (yes, yes, no; yes, yes, no). So that's great.

We can go back and look at the others: this was OpenAI, for example, and we can look at Anthropic, which also got 100%. Now here it's interesting — we see Chat Fireworks Llama 13B gets a zero. We actually saw this previously, but we can dig in and see why: it's because it has this preamble, which we don't want. The grader is doing the right thing — it's catching that and telling us this should indeed be scored zero.

So this is a really nice trick. You can pin an evaluator to a dataset so that when you kick off experiments in the hub, the evaluator just runs behind the scenes for each experiment and automatically scores the results. Really convenient. Of course, I still like to manually review in some cases, but imagine I'm kicking off a large number of experiments: the grader gives me a very quick way to sanity check. For example, if this were 20 different experiments on different LLMs, I could use that correctness column from the automated grader to see which ones look good or bad, then drill in and decide which LLM I actually want to use for this particular task. So the previous video and this one pair really well together: they show how to use LLM-based graders with datasets to automatically perform grading for any experiment you kick off in the hub — all with no code. Very convenient. I do like to use this when I'm making determinations about which LLM to use, for example, for this grading task, which I've used in a lot of RAG tutorials. I like using LLM graders to quickly sanity check and give me good candidates to then drill into. So that's how this can be used together with prompt-based experimentation. Thanks.
Attach evaluators to datasets | LangSmith Evaluations - Part 9
415
LangChain
20240416
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video shows how to attach evaluators to datasets so that they automatically run when an experiment is performed on the dataset. Documentation: https://docs.smith.langchain.com/evaluation/faq/experiments-app
2024-06-10T11:26:34.061252
https://www.youtube.com/watch?v=zMgrHzs_cAg
Hi, this is Lance from LangChain. This is the 11th video in our LangSmith evaluation series, focused on summary evaluators. The motivation is this: let's say I have an evaluation for document grading. We've talked about this previously — I've used it quite a bit in the context of RAG, where I do retrieval and then grade each document to determine whether or not it's relevant to the question. So I have an LLM grader that does this and returns a binary score of yes or no. Now, I've built an evaluation set for this previously, and we've looked at it a little bit — it's called relevance grade. If you look at the examples, each one has a document, an example question, and a score as to whether or not the document is relevant to the question. It's a nice little toy dataset we can use to test this grading process. There are three examples; you can see one is "no", because the question and the document are not related — an example of an irrelevant retrieval.

So here's the question: how can I create a metric that summarizes performance on this dataset in a custom way? You saw before that when we run our evaluations, I can go back to the dataset and look at the experiments, and you do get this correctness score. All that is is the mean of the scores from the individual runs in the experiment. But what if I want something different? What if I don't want correctness to be my aggregate summary metric across the whole dataset — what if I want to define something custom? That's the motivation here.

Let's see where we are in the landscape. We have the dataset we just showed — a developer-curated, manually curated dataset. We're using an LLM-as-judge evaluator, we have ground truth, and we're performing evaluation with LLM-as-judge. The only thing that's different is that I'm going to define a custom metric for my evaluation across the dataset. So the overall flow matches what we've talked about before; the only new trick is the custom summary metric.

Here's the challenge I'm going to pose. First, I define a grader using OpenAI — and we've actually seen this before. I define a data model that returns a Pydantic object, which is basically a binary score of yes or no. I bind that object to my LLM right here, so it produces structured output. I have my grader prompt here, and that's my grader chain. Then I have this function, predict_openai, which takes a dataset input, invokes my grader, and returns the grade as an output key, "grade", which is just yes or no. That's all that's happening here.
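A rough sketch of what a structured-output grader like this might look like — the schema, prompt wording, and dataset input keys here are assumptions for illustration, not the notebook's exact code:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

class GradeDocuments(BaseModel):
    # Binary relevance score for a retrieved document.
    score: str = Field(description="Is the document relevant to the question? 'yes' or 'no'")

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
structured_grader = llm.with_structured_output(GradeDocuments)

grader_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a grader assessing whether a document is relevant to a question. "
               "Answer 'yes' or 'no'."),
    ("human", "Document:\n{doc_txt}\n\nQuestion: {question}"),
])
grader_chain = grader_prompt | structured_grader

def predict_openai(inputs: dict) -> dict:
    # Takes a dataset example's inputs and returns {"grade": "yes" | "no"}.
    result = grader_chain.invoke({"doc_txt": inputs["doc_txt"], "question": inputs["question"]})
    return {"grade": result.score}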
Next, I'm also going to define a Mistral grader. Same flow, but in this case I'm going to use Mistral, running locally on my laptop, as the grader, and I'm going to use JSON mode to enforce JSON output. That's exactly what we want to test here: can I reliably get a binary score out? Here's my grader prompt, and so forth. So the setup is that I'm going to do a comparative A/B test between my Mistral grader and my OpenAI grader on my evaluation set. We've seen all of that before — nothing new there. The only thing I'm going to add is a summary metric on this dataset, and what I want is a metric that combines precision and recall.

If you recall, precision is the measure of true positives over all predicted positives — that is, true positives plus false positives. Recall is the other side of the coin: true positives over all possible positives in the dataset. The intuition is that these are typically in tension. If you just guess positive every time, you will by definition have perfect recall, but your precision will be quite bad because you'll probably have lots of false positives, depending on the structure of your dataset — and the inverse is also true. There's a nice metric called the F1 score that unites precision and recall: it's the harmonic mean of the two (you can look that up), and it's commonly used in machine learning and other fields as a way to combine the trade-offs between precision and recall.

So I'm going to define an F1-score evaluator. Recall that in my dataset every sample is scored yes or no, so I can go through my samples and bin each one into true positives, false positives, or false negatives based on the relationship between my example — the ground-truth reference — and my predicted grade. Remember, our predict function returns a grade, which is the yes/no score. We compare it to ground truth, count the number of true positives, false positives, and false negatives, and compute the F1 score from that. That's all we did — we have this function, compute_f1_score, right here. Now all I need to do is call our good old evaluate method with the particular chain of interest — this time I'll evaluate Mistral — and use my F1-score evaluator. That's it. Let's kick that off. It should be running now against our dataset — you can see we've set our dataset here, relevance grade — and it looks like that ran. Great.
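A minimal sketch of what an F1 summary evaluator like this could look like — the "grade" output key, the "answer" ground-truth key, the function names, and the dataset name follow the setup described above but should be treated as assumptions:

from langsmith.evaluation import evaluate
from langsmith.schemas import Example, Run

def f1_score_summary_evaluator(runs: list[Run], examples: list[Example]) -> dict:
    # Compute a single dataset-level F1 from per-run yes/no grades.
    examples_by_id = {example.id: example for example in examples}
    tp = fp = fn = 0
    for run in runs:
        example = examples_by_id[run.reference_example_id]
        reference = example.outputs["answer"]   # assumed ground-truth key ("yes" / "no")
        prediction = run.outputs["grade"]       # grade returned by the predict function
        if reference == "yes" and prediction == "yes":
            tp += 1
        elif reference == "no" and prediction == "yes":
            fp += 1
        elif reference == "yes" and prediction == "no":
            fn += 1
    if tp == 0:
        return {"key": "f1_score", "score": 0.0}
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {"key": "f1_score", "score": 2 * precision * recall / (precision + recall)}

# One experiment per grader; each gets a dataset-level f1_score logged in LangSmith.
evaluate(
    predict_mistral,                 # assumed name of the local Mistral grader function
    data="Relevance_Grade",          # assumed dataset name
    summary_evaluators=[f1_score_summary_evaluator],
    experiment_prefix="test-score-mistral",
)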
Now we can do the same for OpenAI — that's running. Then we can go back to our dataset, relevance grade. So again, I'm in LangSmith here, this is my dataset, and I can see my experiments: these two tests just rolled in. This is test-score-openai and test-score-mistral — that's the prefix we added to each experiment, which is where those names come from. And what you'll notice is that we now have this F1 score logged with each experiment, which is pretty nice. We can go ahead and open those up in comparison view. So this is pretty cool: we can look at the summary metric here and see that both have a summary metric of 1, so they both got all of our test samples correct. And we can confirm that here — here's the reference, and here's what each chain produced. That's it. Really simple, very nice, and very convenient to be able to define custom summary evaluators on datasets. You can see right here we're just viewing both in comparison mode, and we can confirm the summary metric is indeed correct because both of our chains got every answer right. So that's it — very convenient to use, and I recommend exploring it a bit more. Thanks.
Summary Evaluators | LangSmith Evaluations - Part 11
446
LangChain
20240418
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video explains how to define evaluators that compute customized summary metrics over a dataset. Documentation: https://docs.smith.langchain.com/evaluation/faq/custom-evaluators#summary-evaluators
2024-06-10T11:27:53.437935
https://www.youtube.com/watch?v=IlNglM9bKLw
Hi, this is Lance from LangChain. This is the 13th part in our LangSmith evaluation series. We've been talking about RAG evaluation. In the last video, we saw how to compare an LLM-generated answer to a reference answer, and we dove into that. Now let's talk about some of the other types of evaluation we can do — in particular, hallucination evaluation. This is an evaluation of our answer relative to the retrieved documents. Recall from before: we built a RAG chain, we have a retriever here, here's our chain, and the chain returns both the answer and the contexts. That's the setup. We just talked through reference-answer evaluations; now let's talk about what I'm going to call type two: answer hallucination.

Here we can reuse a lot of the same thinking and the same components. Before, we used a LangChain string evaluator, because fundamentally we're doing string comparisons, and I previously showed that we can use the chain-of-thought QA (COT QA) evaluator for answer evaluations. In this case we're going to change that up a little, because now we want to compare our answer against the retrieved documents as the reference. It's an internal comparison: if something is present in the answer that's not in the documents, we want to penalize it — a common thing that shows up here is hallucination. So all we're doing is this: our RAG chain returns both answer and context, and we pass those into our evaluator. The answer will be the prediction, and we use the context as the reference, so we're comparing our answer against the retrieved documents. That's really all that's happening.

Now, instead of using the COT QA evaluator, I'm going to use a criteria-style evaluator. It's another option, and it's nice because it allows us to supply custom criteria to the grading process. We just talked through the information flow; here's the crux of it, and it will look very familiar. It's just another LangChain string evaluator, like before, with a slightly different name: labeled_score_string. The key point is that I can pass in the particular criteria I want the strings evaluated on. It's still LLM-as-judge, just like before, but now I have this custom criteria field where I can add the rules and logic I want the evaluator to follow. Here I say: is the assistant's answer grounded in the ground-truth documentation? And I tell it what each score means — zero means the answer is all hallucination, five means the answer contains some information that's not in the documents (some hallucination), and ten means it's perfect. The normalize option just lets me normalize the scores between zero and one as a convenience: the grader returns a score from zero to ten, and I divide by ten to get a value between zero and one.
This is where I hook up my run and my dataset outputs to the inputs of my prompt — that's the key part. In this particular case, my prediction is going to be the answer, just like before. The reference is going to be my run's context — the retrieved documents. And the input is just the example input; that's not very important for this eval, but we'll keep it in there. The key point is this: my prediction is my answer, and my reference is the retrieved documents. That's all that's happening. So I define that, kick it off, and add an experiment prefix to note that this is hallucination grading. That's really it. The evaluation has run, and I can go over to my dataset and look at the results. Here's the hallucination prefix we added, and I can open this up. Now I can see the scoring of all my runs — again, this is looking at the answer relative to the retrieved documents, so it's effectively a hallucination score.

The results are mixed: in one case it gives a score of 0.5, in another a perfect score. I can look at the bad case here and open up that run. Here we go — we can go down and look at the prompt, which has all the retrieved documents as well as the answer, and then the LLM's reasoning: the assistant provides a detailed, comprehensive response, but the grader found something it didn't like, so it gives it a two. Anyway, the key point is that this shows you how to hook everything up. All we did was wire our chain's answer and context into the evaluator's prediction and reference, give the evaluator instructions in this criteria field, and let it run — and it's all logged to LangSmith, where we can see the scores. It's a pretty nice trick, and I definitely encourage you to look into using this criteria-style evaluation with labeled_score_string.
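A rough sketch of how this hallucination grader might be wired up — the output keys ("answer", "contexts"), the dataset name, and the RAG function name are assumptions based on the description above:

from langsmith.evaluation import evaluate, LangChainStringEvaluator

answer_hallucination_evaluator = LangChainStringEvaluator(
    "labeled_score_string",
    config={
        "criteria": {
            "accuracy": (
                "Is the Assistant's Answer grounded in the Ground Truth documentation? "
                "A score of 1 means the answer is entirely hallucinated, 5 means it contains "
                "some information not in the documents, and 10 means it is fully grounded."
            )
        },
        "normalize_by": 10,  # map the 1-10 score onto 0-1
    },
    prepare_data=lambda run, example: {
        "prediction": run.outputs["answer"],     # the generated answer
        "reference": run.outputs["contexts"],    # the retrieved documents (assumed output key)
        "input": example.inputs["question"],     # not central to this eval, but passed along
    },
)

evaluate(
    predict_rag_answer_with_context,             # assumed RAG function returning answer + contexts
    data="LCEL-QA",                              # assumed dataset name
    evaluators=[answer_hallucination_evaluator],
    experiment_prefix="rag-answer-hallucination",
)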
RAG Evaluation (Answer Hallucinations) | LangSmith Evaluations - Part 13
337
LangChain
20240424
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video focuses on RAG (Retrieval Augmented Generation). We show you how to check that your outputs are grounded in the retrieved documents of your RAG pipeline. You can use LangSmith to create a set of test cases, run an evaluation against retrieved documents, and dive into output traces – helping you ensure your responses are hallucination-free. Documentation: https://docs.smith.langchain.com/cookbook/testing-examples/rag_eval#type-2-answer-hallucination
2024-06-10T11:28:51.070656
https://www.youtube.com/watch?v=xTMngs6JWNM
Hi, this is Lance from LangChain. This is the 15th video in our LangSmith evaluation series, and we're going to focus on regression testing. The past few videos talked a lot about RAG evaluation — just to refresh, we covered how to evaluate the RAG chain's answer versus a reference, the answer versus the documents for hallucinations, and even the documents relative to the question for retrieval relevance. Now, here's a common situation that comes up when building a RAG pipeline: what if I want to change the LLM? A good example is that some pretty cool and interesting new open-source LLMs have recently come out, like Llama 3 and Phi-3. How can I know whether I can replace, for example, GPT-4 as my baseline with one of these for my particular use case and my particular RAG pipeline? This is where the notion of regression testing comes in. We've previously talked about building an eval set, running it on our answers, and evaluating answer accuracy relative to a ground-truth answer. Regression testing lets me run this across several variants — say, different LLMs performing the generation step — and identify cases that either improve or regress relative to my baseline. So the key point of regression testing is the ability to isolate instances in your eval set that are getting better or worse.

To kick this off, I'm going to build a new eval set, defined here and called RAG QA LCEL. It uses the same LangChain Expression Language documentation as the prior RAG videos, just with a few slightly different questions. I've built that, and I've built my index — again, we're just indexing the LangChain Expression Language documentation, and I'm choosing a large chunk size, just like we saw before; no major changes. I've laid out my RAG pipeline, and it looks very similar to what we saw before. The only difference is a slight modification so I can specify different providers: in this case I can choose Ollama, which lets me run a number of open-source LLMs locally, alongside OpenAI — with a RAG prompt for the open-source models and one for OpenAI. Again, no major differences; I'm just extending my RAG chain so it can run with different LLMs. I'm defining three of them: my RAG chain with OpenAI, with Llama 3, and with Phi-3. For Llama 3 and Phi-3 I use Ollama as the provider and just specify the model — I've already downloaded these locally with a simple "ollama pull llama3" or "ollama pull phi3", and they run locally on my laptop, which is a really nice thing about Ollama.

I've also defined an evaluator for answers. We saw this previously: the labeled_score_string evaluator basically lets me supply criteria — in this case I'm asking, is the assistant's answer grounded in the ground-truth answer? — and I give it criteria for scoring from one to ten, ten being the best and one being the worst. That's all that's going on here. Then I kick off my evals: I run the eval on OpenAI first, then on Llama 3, then on Phi-3. Those evals have run.
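A rough sketch of this three-model regression setup — the function names, dataset name, and output keys are assumptions based on the description, not the video's exact code:

from langsmith.evaluation import evaluate, LangChainStringEvaluator

answer_evaluator = LangChainStringEvaluator(
    "labeled_score_string",
    config={
        "criteria": {
            "accuracy": "Is the Assistant's Answer grounded in the Ground Truth answer? "
                        "Score 1 (worst) to 10 (best)."
        },
        "normalize_by": 10,
    },
    prepare_data=lambda run, example: {
        "prediction": run.outputs["answer"],
        "reference": example.outputs["answer"],   # assumed ground-truth key
        "input": example.inputs["question"],
    },
)

# One experiment per model; the comparison view then highlights regressions vs. a chosen baseline.
for prefix, rag_fn in [
    ("gpt-4", predict_rag_answer_openai),         # assumed names for the three RAG chains
    ("llama-3", predict_rag_answer_llama3),
    ("phi-3", predict_rag_answer_phi3),
]:
    evaluate(
        rag_fn,
        data="RAG QA LCEL",
        evaluators=[answer_evaluator],
        experiment_prefix=prefix,
    )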
Now let's go over and look at my dataset. Here is the dataset we created — RAG QA LCEL — and we can confirm this is the eval set we're running on, the one we created up here. Now we can look at our experiments. This is what we saw before: the experiments page, no major changes, and we can see our three experiments here. What I'm going to do is select these three experiments — one, two, three — and open comparison mode. This is where things get interesting, and we'll see a few things that are a bit new.

First, the three experiments we selected are all here, and we see this idea of a baseline. What does baseline mean? The baseline is basically your reference. In my case, let's say GPT-4 was my prior baseline — the model I had been using — and I want to see how Phi-3 and Llama 3 compare to it. These are our three examples, just like we saw, and I can click on one to open it up: here's the question, here's the answer — we've gone through that before. So here's the input, here's the reference output, and now here are my three experiments. Just like before, we've set a baseline, and I can see the scores from my evaluator.

What's interesting is that I can scroll over and look at my two other experiments — Llama 3 here, Phi-3 here — and your attention is immediately drawn to these red or green cells. A red cell means the performance is worse than my baseline; green means it's better. In this case my baseline was 0.1, and Phi-3 actually does better on the second example — it got a score of 0.5, which is why it's green. Otherwise you can see regressions across the board for questions one and three for both models. Now, I can do a few different things here. First, like we saw before, I can always click this button to open up the evaluator run itself, so I can audit the evaluator — we talked about that previously; it's really useful for seeing why a particular score was given. I can go back, and you can see my baseline is still preserved, which is really convenient. This indicator at the top tells me the total number of regressions and improvements relative to my baseline: I can click it to highlight only the cases that improve, or only the cases that regress. It's a really useful way to drill into the cases that are improving or getting worse relative to the baseline. In this case there are only three examples, but in realistic cases you may have an eval set of dozens or hundreds of examples, so this can very easily be used to isolate your improvements and your regressions. It's really helpful. I also want to show that if you go to the display options, you can open up additional metrics — token usage, latencies — and you can collapse those again to show less. You can also see the full text of the generations if you want, but by default that's hidden, so the view is way cleaner.
And those are probably the main things I wanted to show you there. You can of course choose different or additional experiments if they're not all selected here — it's easy to add experiments. So this is a really useful and convenient workflow. I do this all the time: I'm always running experiments with different LLMs, I always want to isolate cases that are improving or regressing, and I want to dig into why. Using these filters, I can very easily isolate the cases that are, for example, getting worse and drill into why. In each of these cases you can also look at the trace — this drills into the trace that produced the generation, and I can go down and audit the entire chain: here's my ChatOllama invocation, here's the prompt, which contains the context, and here's my RAG answer. So there's a lot you can play with here. Regression testing is obviously super powerful for identifying cases where your chain gets worse or better under different perturbations — different models, different chunk sizes, or other changes. So definitely play with this; it's extremely useful and helpful in building LLM applications. Thanks.
Regression Testing | LangSmith Evaluations - Part 15
488
LangChain
20240501
Evaluations can accelerate LLM app development, but it can be challenging to get started. We've kicked off a new video series focused on evaluations in LangSmith. With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video focuses on Regression Testing, which lets a user highlight particular examples in an eval set that show improvement or regression across a set of experiments. Blog: https://blog.langchain.dev/regression-testing/ LangSmith: https://smith.langchain.com/ Documentation: https://docs.smith.langchain.com/evaluation/faq/regression-testing
2024-06-10T11:29:59.327198
https://www.youtube.com/watch?v=_ssozegykRs
Hi, this is Lance from LangChain. OpenAI just released GPT-4o, or Omni, today, which is a pretty exciting release. It reports significant improvements in non-English languages, it's much faster and cheaper in the API than the prior state-of-the-art GPT-4 — which is really exciting — and it also incorporates multimodality, so audio and visual as well as text. Now, the question you might ask is: let's say you already have an app using an OpenAI model, like the state-of-the-art GPT-4 Turbo. How do I decide whether it's actually safe to upgrade to this model? "Safe" can mean a few different things. On one hand, are there any regressions in the performance of the application itself? I have an app already using GPT-4 Turbo, with a bunch of prompts I've tuned — do those prompts translate over to the new model seamlessly, or do they exhibit odd behavior or regressions that I'd want to characterize? Any time you change the model in your application, you really want to investigate how the performance of the application actually changes. That's one. There's also user experience: GPT-4o reports better latency, but does that actually hold in my case? And if I'm changing my app from GPT-4 Turbo — or say from one of the GPT-3.5 variants — given the new pricing, can I make the leap up to GPT-4o, and what are the implications for app performance and things like latency? Those are all things you'd want to examine to determine whether it's safe to make this switch for your users.

You can think about this in three pieces. I have a dataset of examples — in this case, for a RAG app, input-output pairs that are my ground-truth inputs and outputs. I have an evaluator, which I'll show you how to define, that compares my ground-truth answers to my RAG app's answers. And I have a RAG app that simply takes GPT-4o versus GPT-4 Turbo as a user-defined input. That's all I really need. Then I'll show you how to use the LangSmith UI to really dig into the differences — that is, the regressions or improvements that come from upgrading the app to GPT-4o.

Here's some code. I'm defining an index for RAG: I'm taking the LangChain Expression Language documentation, which is around 70,000 tokens of context — basically a subset of our docs — and creating a vector store locally from it. Then I'm defining this RagBot class. It's a super simple app that takes an OpenAI client and a model name; it doesn't use LangChain at all, just a thin wrapper around the raw OpenAI API. It does retrieval and then generation with a standard RAG prompt. You could also use a local model with Ollama if you want, but in any case, here's all I need to do: I define three functions, which simply instantiate my RAG bot with different settings. Here I'll use GPT-4 (1106), GPT-4 Turbo, and then GPT-4o, the new model. That's it.
So basically I have three different functions that all use my little RAG bot with different GPT-4 versions. That's all I need to do there. Second, my dataset: I've already created a dataset in LangSmith. If you go to the Datasets & Testing tab, my dataset is defined right here — it's called LCEL eval — and I can look at the examples: here are all my ground-truth inputs and outputs. Again, these are inputs and outputs related to the documentation I built my app from, so this is all consistent. That's really all I need in terms of the dataset. In terms of the evaluator, I'm going to build a custom evaluator. See, this reference is the reference answer — for every question I have a reference answer — and from my chain I get my RAG pipeline's prediction. The evaluator compares my reference to the prediction using this prompt right here, so it's all super transparent. All I have to do is use the LangSmith evaluate function and pass this evaluator function into it. It's super simple: it takes in my run and my example. The run is my chain; the example is the example from my dataset. It extracts the predicted answer from the run and the reference answer from the dataset example, and compares them using this prompt. I'm using LLM-as-judge — in this case GPT-4 Turbo as the judge — and it outputs a structured object as a grade, which I specify here, and I normalize it: the raw score is between 10 (best) and 1 (worst), and I divide by 10, so it ranges from 1 down to 0.1. That's all I need to do.

I run evaluation on my three models right here, and then we can go over and look at our dataset, which now has three experiments. That's what you see here: my experiments are GPT-4 (1106), GPT-4 Turbo, and GPT-4o (Omni). What you can see, which is pretty nice, is the answer accuracy. This aggregate score goes from 0.84 up to 0.88 — you can see it in this plot — so it increases across my three experiments, and it does appear that GPT-4o is indeed better than the other two variants in terms of answer accuracy. Now let's also look at latency — this is the other major thing, in fact the thing they really highlight with GPT-4o. I can look at the P50 latency across the three experiments, and it shows a big difference: relative to GPT-4 Turbo, the prior state of the art, the latency drops quite a bit — it looks like maybe a 30% drop. My P50 goes from 23 seconds down to 16 seconds, while the answer accuracy, as we showed before, increases. So that's a pretty clear win. And if the cost benefits carry over, I would know pretty definitively that this looks like a really safe upgrade for my particular app — in terms of latency, in terms of my evaluator's accuracy, and in terms of cost. We do typically have costs logged, but we don't yet have costs in for this model, I believe.
This just came out today, but you'll have that available in LangSmith very soon as well. If I want to dig in deeper, I can select these three experiments and go to compare, which opens up our comparison mode. What's pretty cool is that I can set a baseline — in this case I'll set GPT-4 (1106) as my baseline — and then compare GPT-4 Turbo and GPT-4o against it. As I go through, I can see all the cases that actually get worse or better, summarized at the top: here two get better and four get worse; here six get better and three get worse. So again we can see that GPT-4o improves, which we also saw from the aggregate scores, but you can look granularly at each example and see why it improves. You can click here to open up a particular example and dig in: here's the reference input, the reference output, and my three generations. You can really look granularly and explore whether you agree with the evaluator. This gives you a nice way to go in, example by example, and convince yourself that it's safe in your particular case to upgrade to the new GPT-4o. And if I zoom all the way back out: in our particular case — again, an eval set of 20 questions related to LangChain Expression Language — the answer accuracy gets better with GPT-4o, which is great; the latency drops quite a bit, so that's also a win; and if the cost reduction is as reported, this would be a clear win — a safe upgrade in my particular case. Thanks.
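For reference, a rough sketch of a custom answer-accuracy evaluator along the lines described in this video — the prompt wording, the "answer" keys, the model choices, and the function and dataset names are assumptions:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langsmith.evaluation import evaluate
from langsmith.schemas import Example, Run
from pydantic import BaseModel, Field

class Grade(BaseModel):
    # 1 (worst) to 10 (best): how well the answer matches the reference.
    score: int = Field(description="Accuracy score from 1 to 10")

judge = ChatOpenAI(model="gpt-4-turbo", temperature=0).with_structured_output(Grade)
judge_prompt = ChatPromptTemplate.from_messages([
    ("system", "Grade the student answer against the reference answer on a 1-10 scale."),
    ("human", "Reference answer: {reference}\n\nStudent answer: {prediction}"),
])

def answer_accuracy_evaluator(run: Run, example: Example) -> dict:
    prediction = run.outputs["answer"]            # assumed output key of the RAG bot
    reference = example.outputs["answer"]         # assumed ground-truth key in the dataset
    grade = (judge_prompt | judge).invoke({"reference": reference, "prediction": prediction})
    return {"key": "answer_accuracy", "score": grade.score / 10}  # normalize to 0-1

# Run once per model variant to get the three experiments compared in the UI.
evaluate(
    predict_rag_answer_gpt4o,    # assumed name of the GPT-4o variant function
    data="LCEL eval",            # dataset name as described in the video
    evaluators=[answer_accuracy_evaluator],
    experiment_prefix="gpt-4o",
)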
How to evaluate upgrading your app to GPT-4o | LangSmith Evaluations - Part 18
497
LangChain
20240513
OpenAI recently released GPT-4o, which reports significant improvements in latency and cost. Many users may wonder how to evaluate the effects of upgrading their app to GPT-4o? For example, what latency benefit will users expect to gain and are there any material differences in app performance when I switch to the new GPT-4o model. Decisions like this are often limited by quality evaluations! Here, we show the process of evaluating GPT-4o on an example RAG app with a 20 question eval set related to LangChain documentation. We show how regression testing in the LangSmith UI allows you to quickly pinpoint examples where GPT-4o shows improvements or regressions over your current app. GPT-4o docs: https://openai.com/index/hello-gpt-4o/ LangSmith regression testing UI docs: https://docs.smith.langchain.com/old/evaluation/faq/regression-testing RAG evaluation docs: https://docs.smith.langchain.com/old/cookbook/testing-examples/rag_eval Public dataset referenced in the video: https://smith.langchain.com/public/ea1f6ca5-de52-4d36-bd7b-fde3faa74a70/d Cookbook referenced in the video: https://github.com/langchain-ai/langsmith-cookbook/blob/515f4140cb2576ea93051ea5bb4adec652e31893/introduction/langsmith_introduction.ipynb
2024-06-10T11:32:03.230516
https://www.youtube.com/watch?v=3cDtDI2W-xA
Hi, this is Lance from LangChain. We're continuing our LangSmith evaluation series, focused today on backtesting. To motivate this, let's say we have an app in production — for example, one of the RAG apps we talked about in prior videos. The particular RAG app in our case is GPT-4 Turbo using a vector store. That's in production, and we're collecting production traffic from users using this app version. Now, what happens if we want to try different variants — say, context stuffing rather than using my vector store? One really convenient thing to be able to do is take a bunch of production logs we've already collected, run my new app variant on them, and see how the outputs compare. That's a really common thing you might want to do. If you look at our framework, the dataset can actually come from existing production logs — that's really convenient, because we don't have to build some curated dataset; we can just take user data that has already flowed into our existing app and turn it into a dataset. That's step one. Then we can test a different app variant on those same inputs, and we can use a pairwise evaluator to compare our new variant to the old, production version that's currently running. That's a common workflow, so let's walk through it.

I'm going to create a new project; I'll call it backtesting — let's say backtesting-v2. And here's my app; let's say this is running in production. Here are a few user questions related to LangChain Expression Language — recall this particular app ingests information about LCEL — and these are all running, so we can kick a few of them off. Great: I've run five different user questions through my app. Now, obviously we're doing this in a notebook, but assume it's a production app that's just out there, with users interacting with it — that's what we're simulating. I created this project, backtesting-v2, and if we go over to LangSmith and look at my projects page, it's there, with all my traces logged. Fantastic.

Now, this code right here will take those traces: I can specify my run filters in terms of start and end time, and I can choose the project I want to grab from — in this case, the project we just logged to. What we're doing is creating a new dataset from those user logs. If you go over to our datasets page, we now have this new dataset called backtesting-v2 (with the exact time and so forth), and those user inputs are now simply inputs in my dataset — I've pulled them into a dataset. That's all I've done. Now I can run an evaluation on this dataset using a different variant of my app — I'll call it predict_rag_answer_gpt4turbo. It's a different app variant: it doesn't use a vector store, it does context stuffing, which we can talk about a little later.
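As a rough sketch, the log-to-dataset step described above might look something like this — the project name, dataset name, and time window are assumptions, and the video's exact helper may differ:

from datetime import datetime, timedelta
from langsmith import Client

client = Client()

# Grab the root runs logged to the production project within a time window.
runs = list(
    client.list_runs(
        project_name="backtesting-v2",                 # assumed project name
        start_time=datetime.now() - timedelta(hours=1),
        is_root=True,
    )
)

# Create a dataset whose examples are the production inputs (and baseline outputs).
dataset = client.create_dataset(dataset_name="backtesting-v2-dataset")
client.create_examples(
    inputs=[run.inputs for run in runs],
    outputs=[run.outputs for run in runs],
    dataset_id=dataset.id,
)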
So let's say I kick off that evaluation — I run that right now. If I go back to my dataset, we can see a new experiment has been kicked off. And what's pretty cool is that when I created this dataset, my initial experiment is the prod baseline: that's basically my production app that we logged from, the one whose inputs and outputs we originally collected. So this is our baseline — these are its inputs, and these are the outputs from that baseline version we looked at initially. Now I've just run a new experiment using our variant, GPT-4 Turbo with context stuffing. That's what I just kicked off, and it's running now — you can see the name here, my experiment prefix gpt-4-turbo. We can check the state of that experiment... it looks like it's still going... one more to go... and it's done. Great, so now we're finished.

Here's where things get interesting. Let me back up: what did we do? First, I had a project that I was logging to — this simulates, for example, a production app, with user interactions being logged to that project. I took those input and output pairs and pulled them into a new dataset, and by default, when I do that, the baseline is captured as my prod baseline: my production app's inputs and outputs. Then I ran a new experiment with my new app version on those same user inputs, so now I have new outputs — the outputs from my new chain. That's great. And it all comes together in a cool way, because I can use a pairwise evaluator to compare them.

I created a new pairwise evaluation prompt for this particular task. The task, if you recall, is code question answering based on the LangChain Expression Language documentation. So I set up my pairwise prompt: please act as a judge to evaluate the quality of code responses from two assistants. I give my criteria: begin your evaluation by comparing the responses based on whether they contain a clear overview of the problem, whether they contain code imports, and whether they contain a code solution — and don't allow length to affect the results, and so forth. That's my evaluation prompt. Now I can define this pairwise evaluator — we talked about this in the last video, so you've already seen a good example of it — and I'm creating a pairwise evaluator right here. What's pretty cool is that I can run that pairwise evaluator on my two experiments: this is the name of my most recent experiment, gpt-4-turbo, and for my initial experiment I can go back and get that name.
It's the prod baseline, so I can just plug those two names in. So I'm running a comparative evaluation between my prod baseline and the new app variant I ran backtesting on. I've kicked that off, so it's running now, and of course it's logged to the same overall dataset. If I go back to my dataset, recall we had two experiments: my prod baseline, and my backtest — the same prod inputs run through my new chain, GPT-4 Turbo with context stuffing. And now I've kicked off an experiment comparing the two: the prod baseline versus my new variant. The baseline, remember, uses retrieval from a vector store; the variant uses GPT-4 Turbo with context stuffing, stuffing the entire LCEL docs into the LLM. That has some benefits over retrieval — you don't have issues related to retrieving the correct document chunks, because you're just passing in everything — and there are trade-offs we could discuss at length another time.

So this is running, and it looks like it's finished. This is pretty cool: I can go here and look at all the inputs, here's my prod baseline, and here's my variant — the one I ran backtesting on. And what's pretty neat about the pairwise eval is that we can really see the preference: in this particular case, it prefers the output from our variant over the baseline. And just as we saw in the previous video, we can click in and investigate why — the evaluator gives you an explanation — so you can really dig into this.

It's a pretty nice workflow, so let's zoom all the way back out. What did we really do here? We had an example of an app running in production — a simulation of that — and we collected five user inputs. We turned those into a dataset. We then ran what we call backtesting on that dataset with a new app variant: my baseline used retrieval from a vector store, my new variant used context stuffing. For every input, that produced a new generation from the variant. Then we ran a comparative eval asking which one is better — here's my prompt — and kicked it off by just adding the names of the two experiments. And as we saw before, you get this nice comparative assessment with a detailed evaluation. So it's a pretty useful thing to do: if you have an app running in production, you can easily grab those production logs, turn them into a dataset, do backtesting on them with different variants or chains you want to test, and even do pairwise evaluation to say which one is better or worse based on criteria you define. It's a really nice, highly convenient workflow for testing the different variants of chains you might want to put in production. Thanks.
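As a rough sketch, the pairwise comparison step might look like this — the pairwise-evaluator signature shown here, the judge prompt, the output keys, and the experiment names are assumptions:

from langchain_openai import ChatOpenAI
from langsmith.evaluation import evaluate_comparative
from langsmith.schemas import Example, Run

judge = ChatOpenAI(model="gpt-4-turbo", temperature=0)

def preference_evaluator(runs: list[Run], example: Example) -> dict:
    # Ask an LLM judge which of the two responses is better; score winner 1, loser 0.
    answer_a = runs[0].outputs["answer"]          # assumed output key
    answer_b = runs[1].outputs["answer"]
    question = example.inputs["question"]         # assumed input key
    verdict = judge.invoke(
        "Act as a judge of two assistant answers to a coding question.\n"
        f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}\n"
        "Reply with exactly 'A' or 'B' for the better answer."
    ).content.strip()
    scores = {runs[0].id: 1, runs[1].id: 0} if verdict == "A" else {runs[0].id: 0, runs[1].id: 1}
    return {"key": "preference", "scores": scores}

# Compare the backtest experiment against the captured production baseline by name.
evaluate_comparative(
    ["gpt-4-turbo-context-stuffing", "prod-baseline"],   # assumed experiment names
    evaluators=[preference_evaluator],
)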
Backtesting | LangSmith Evaluations - Part 19
605
LangChain
20240516
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video focuses on Backtesting. We show you how to build a dataset directly from production logs. We then show how to run different versions of your app on those logs and use pairwise evaluation to compare them against the baseline production app.
2024-06-10T11:33:19.329767
https://www.youtube.com/watch?v=jypHvE1vN5U
Hi, this is Lance from LangChain. We're continuing our LangSmith evaluation series, and here we're going to dig a bit more into online evaluation. If you recall, this is a framework for how to think about evaluation in general. On the left you see the different types of datasets you can work with — manually curated datasets, user logs — and different types of evaluators. What I really want to draw your attention to over here is the differentiation in terms of how you can apply evals. You can apply evals on production traffic — I have an app in production, and as it runs I'm doing evaluations of some sort on it — or, of course, you can do evaluations on curated datasets or assertions. Online evaluations live in this top bucket: we're focused on production traffic, so I have an app in production and I'm evaluating something about it while it's running.

A common thing people like to evaluate is something like toxicity — for example, does the user input contain any personally identifiable information or toxic queries? That's something you often want to be able to detect within an application as a developer, so I'll show an example of it. I have a project called rag-online-evaluation, with a few traces you can see here, and here's basically my RAG app. With this app I'm simulating user questions: here's an input question, here's another, and those are all logged to my project. Now let's say I want to create a new rule that evaluates whether the input question contains any toxic information or personally identifiable information. All I have to do is go to Rules and click Add Rule — this lets me create a new online evaluator. I click Online Evaluation, then Create Evaluator, and I'm going to use one of the suggested prompts we already provide: right here you can see Tag LLM Generations. This is a prompt that allows you to tag arbitrary inputs with a label or a score based upon some specified criteria. You can see the prompt template is already set up, and I can choose my model — let's say I want GPT-4 Turbo. It's basically preset for me: you're a classifier that will tag input text with a score based on the supplied criteria, and you're tagging based on insults, threats, negative comments, and any personally identifiable information — and you pass in the text.

What's pretty neat is that I can use this preview to test what it will look like on my particular chain. With the preview, I can see the text it's going to get is, for example, my most recent trace. In my case, the most recent trace was this test case — "How do I create a RAG chain?" — and I also plumbed in this OpenAI API key as a test of an adversarial prompt, a prompt containing personally identifiable information I'd want to flag. My score criteria are down here: it will return a score of one or zero, along with an explanation for the score. So that's set; I just hit continue, and then I'm going to name my evaluator — I could call it toxicity, but I'll call it guardrails.
Input, name this toxicity, or I'll call this guardrails, input guardrails, and I'll save that. So now I have a new online evaluator called input guardrails that'll run on the inputs to my chain. So I go back to my app, and I've passed in this OpenAI API key, and I want to see whether or not my online evaluator can actually flag that. So I go back, and now my runs or my traces, you can see here's my app, here's the runs within my app, so this is kind of full trace we're looking at. We can see it's tagged with feedback. So the feedback I get is a score, in this case, of one. So if you look back at the prompt, one means that indeed there is a personal identifiable information within this prompt. So we can even look here, we can look at the explanation for this, open the evaluator run, and we can go down, and we can see, yes, it is scored as one. The test contains personal identifiable information as it includes potential API key, which is sensitive information that should not be shared publicly. So it's a good example of the ability to set up an online evaluator against a simple project. In this case, a mock rag app. It can run on the inputs of your app really easily, and it can establish any customizable guardrails or things that you want to flag for and return kind of a very simple customizable score. Yes, no, one, zero, in order to identify that the input contains information you want to flag. Thanks.
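To make the setup above concrete, here is a minimal sketch of how an app might log traces to a project like "Rag Online Evaluation" so that an online evaluator rule can run over its inputs. This is not the code from the video; the function name, model choice, and the deliberately fake key string are assumptions for illustration, and it assumes LANGCHAIN_API_KEY and OPENAI_API_KEY are already set in the environment.

import os
from langsmith import traceable
from openai import OpenAI

os.environ["LANGCHAIN_TRACING_V2"] = "true"                # enable LangSmith tracing
os.environ["LANGCHAIN_PROJECT"] = "Rag Online Evaluation"  # project the rule watches

oai = OpenAI()

@traceable(run_type="chain", name="mock_rag_app")  # hypothetical app name
def mock_rag_app(question: str) -> str:
    # Stand-in for retrieval + generation; the online evaluator only reads the
    # run's inputs and outputs, so the internals don't matter for this sketch.
    resp = oai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Simulated user questions, including one that should trip the guardrail rule
# because it contains an API-key-like string (deliberately fake here).
mock_rag_app("How do I create a RAG chain?")
mock_rag_app("How do I create a RAG chain? My key is sk-THIS-IS-A-FAKE-KEY")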
Online Evaluation (Guardrails) | LangSmith Evaluations - Part 21
291
LangChain
20240522
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video focuses on Online Evaluation focused on guardrails for PII or toxicity. We show you how to configure an LLM-as-judge evaluator with a customized prompt that can detect PII or toxicity on the inputs of an example RAG app. Check out the docs for online evaluation: https://docs.smith.langchain.com/how_to_guides/monitoring/online_evaluations
2024-06-10T11:34:04.141152
https://www.youtube.com/watch?v=FQMn_FQV-fI
Hi, this is Lance from LangChain. We're continuing our LangSmith Evaluation Series, focused on dataset splits. Let me start by giving some motivation for why we might want to use dataset splits. I have a RAG app that I've been testing throughout this series, focused on the LangChain Expression Language documentation. Now, very recently those docs were updated from v0.1 to v0.2. So what's going to happen now is, I have an eval set of 20 questions related to the v0.1 docs that I now want to store as a split, and I want to see: if I create an app using these newer v0.2 docs, is it backwards compatible, in terms of evaluation, with my existing eval set? Or do I need to change my app to load information from other sources? Basically my worry is that when we upgrade the docs, things may have shifted around, so what I was loading from here may be insufficient with these v0.2 docs. My concern is that the data loading is a bit different, and I may have regressed on my existing eval set with this docs upgrade. So what I'm going to do is go into my datasets here and create a new dataset. I'm going to select a CSV and pull this in. This is my existing dataset, LCEL question and answer, and I'm going to call it "splits", splits for LCEL RAG QA, I'll call it that. It's a key-value dataset; you can see my input and output fields, and that's great. So I create this dataset. Nice, that's all done. Now, here's where things get kind of interesting. I'll move this over here. I have 20 questions here; I'll open this up so we can look. These 20 questions are from my original dataset, so that's all these. This is the initial eval set that I've been using throughout the series, related to RAG and the LangChain Expression Language. I'm going to add these to a split: I'm going to create a new split and call it LCEL v0.1, to identify that these question-answer examples were taken from the original v0.1 docs. Cool, so now I have this split that's been created right here. Now I can go back, and this is the original set, now 25 examples in total, because I have five new examples, which you can see down here. Cool, so that's these. These are examples that I've added, and I'm going to call this split LCEL v0.2. These are five examples I just very quickly put together that are on the main page of the v0.2 docs. It's kind of like a test; these should definitely work reasonably well because I've created them newly. My bigger question is: do I see regression on the initial split? So right now what I have here is two data splits. One is these five questions I put together pretty quickly based upon the newer structure of the docs, kind of like a test. This is actually what the new structure of the docs looks like; this is the new landing page for the LangChain Expression Language, and, you know, it's pretty nice, it has lots of rich information here. That's great. Now here is the old landing page, and it's just different, right? Different in structure. The old landing page kind of fanned out to a bunch of subsections, which is where my initial eval set drew all its information. So if I zoom back out, what do we have here? We have two splits for my dataset. This LCEL v0.1 is my initial set of questions, my initial 20 questions.
And this LCEL v0.2 split is five new questions that I derived from this newer version of the docs. So this is kind of like my original eval set, and this is a new eval set, but they're all related to the LangChain Expression Language; I'm just going to store them as two splits within this project. Cool. So I've created a RAG app here. That's all set; I have my RAG bot, which we've used previously. Let me skip through some of this. The key point is simply this: all I need to do is take this dataset name here, and I can kick off an evaluation and specify the two splits I want to run on. Great. So both evaluations ran: I ran one on the newer split, and I ran the other on the older set of questions, my original set of eval questions. Cool, here are the two results, and I can see the split names here: this is v0.2, the newer ones, and v0.1, the older ones. Performance is a lot worse on the older ones; it looks fine on the new ones. So what this tells me is that I can investigate a little more deeply, and I can see that performance is pretty bad on my original set of eval question-answer pairs. What this would tell me is: okay, I need to do a little bit of work on my document loading within the new doc structure, as you can see what we do right here, to make sure that it is actually gathering all the information necessary to perform well on our original eval set. If you go to the docs here, we can see the doc structure has changed a lot, so it's very possible that we're no longer getting the right documentation pages necessary to answer all the questions that are relevant. This kind of thing will be really important if, for example, I have a production app that's loading from a set of documentation and the documentation structure changes. I want to determine how much regression I see with this documentation change, and how I then need to modify my data loading to make sure that I have full coverage on the questions in my original eval set. So that's a good example of where you might use splits. Also, things like partitioning test examples from training examples, if you're doing fine-tuning, is another very classic application for using splits. But anyway, this really just shows how you can set them up using the UI really easily. Thanks.
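As a rough sketch of what kicking off those two evaluations might look like with the LangSmith Python SDK: the dataset name, split names, input/output keys, target function, and evaluator below are assumptions for illustration, not taken from the video.

from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()
DATASET = "LCEL-RAG-QA"  # hypothetical dataset name containing the two splits

def predict(inputs: dict) -> dict:
    # Stand-in for the RAG bot over the LCEL docs; assumes a "question" input key.
    return {"answer": f"(placeholder answer for: {inputs['question']})"}

def mentions_reference(run, example) -> dict:
    # Toy evaluator: does the generated answer mention the reference answer?
    answer = (run.outputs or {}).get("answer", "")
    reference = (example.outputs or {}).get("answer", "")
    return {"key": "mentions_reference", "score": int(reference.lower() in answer.lower())}

for split in ["lcel-v0.1", "lcel-v0.2"]:
    evaluate(
        predict,
        data=client.list_examples(dataset_name=DATASET, splits=[split]),
        evaluators=[mentions_reference],
        experiment_prefix=f"rag-bot-{split}",
    )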
Dataset Splits | LangSmith Evaluation - Part 22
400
LangChain
20240528
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video focuses on dataset splits, which allow you to partition datasets and evaluate on specific splits. Docs on how to create/manage dataset splits: https://docs.smith.langchain.com/how_to_guides/datasets/manage_datasets_in_application#create-and-manage-dataset-splits Docs on how to evaluate on a dataset split: https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-dataset-split Try it out in LangSmith: https://smith.langchain.com/
2024-06-10T11:35:09.479604
https://www.youtube.com/watch?v=Pvz24JdzzF8
Hey, this is Lance from LangChain. We're continuing our LangSmith Evaluation Series, talking about repetitions. The intuition here is actually pretty straightforward. We've talked a lot about different types of evaluations, for example ones that run on larger eval sets with maybe complex LLM-as-judge evaluators. In a lot of these cases, we run an evaluation and we get some statistics or some metrics on our performance across the dataset. And you might ask the question: how reliable is this? Can I trust this result? If I run it again, can I reproduce it? You can have noise introduced from different things. Your chain may produce variable outputs depending on how you run it; again, LLMs are, for the most part, non-deterministic. And your LLM-as-judge evaluator is itself using an LLM, so there's some variability that can be introduced from the grading itself. So, in any case, the idea of repetitions is a way to address this: automatically run your evaluation n times to see whether or not it's consistent. It's very intuitive, it's very useful, and I've done this manually many times. But LangSmith is now introducing a nice new flag that's set simply with the SDK, where you can specify some number of iterations to run, and it's all nicely supported in the UI. So let's take an example case. This is an eval set I've already worked with, related to the LangChain Expression Language, and this is an evaluator that I've used previously with a RAG chain that operates on the LangChain Expression Language documentation. This evaluator is basically going to grade an answer from the RAG chain relative to a ground-truth answer, between 1 and 10. Okay, so that's all that's happening here. And this is my RAG bot, which I'm just initializing with a few different parameters: I'm going to run it with GPT-4o with a vector store, and I'm going to run it with GPT-4 Turbo without a vector store. So these are just two example experiments I might run. And here's where it gets interesting. When I set up my evaluate function, just like we've done previously, I can just specify num_repetitions and say how many times I want to run this experiment. In this particular case, my eval set has 20 questions and it's run three times, so when I run this, it actually runs 60 different evaluations. That's it. And again, I can run it on different configurations, just like I've done previously. So if I go over to the UI, here's my dataset, just like we've seen before, and here are my experiments. You're going to see something new and kind of interesting here: you're going to see this repetitions flag noted here. What's cool about this is that if you open up any of your experiments, let's for example look at this GPT-4 Turbo experiment, you can see, if you open up any example, this is your input, here's your RAG chain, and you actually can see each repetition run. And what's nice about this is that you can compare the answer for each of your repetitions. That's what you see here. And you can look at the grading. You can see there are interesting differences in the answer itself depending on the repetition, which can happen because certain LLM chains do have some variability, right? So the answers can differ by the chain.
And also, the grader, given the same output, can sometimes change, right? So what's nice about this is that I can go through my dataset and actually investigate cases of variability in the output. In this particular case, for example, one repetition is graded as 1 while others are graded 0.8 or 0.7, right? And what's nice is that the scores reported here are the mean of those three repetitions, so they perform some smoothing across variability that's potentially inherent in your chain itself or in your LLM-as-judge evaluator. It's a nice way to build better confidence in your results. In this particular case, this is working with a larger, more complex eval set; it was a 20-question eval set, and you can look at the examples here. These are harder questions, so I do expect the various experiments to have more trouble with them. And I'm using an LLM-as-judge evaluator, in this case with custom criteria, grading from 0 to 10. So again, that's a trickier grading scheme; there's more opportunity for variability there. And I can see in my experiments that indeed, if you dig in (we showed some examples previously), there is some variability across my grading, you know, a grade of 1 here versus 0.5, 0.5. The ability to run with repetitions gives me a little bit more confidence in the result. It also makes it easier to compare with some confidence across different experiments, when you've used repetitions to smooth out noise in your grader or in your chain itself. And really, that's the intuition behind using repetitions. It's very simple. You can see I've run a number of these different experiments with three repetitions each, and what's reported is the aggregate of those means for each example, so I have a little more confidence in the difference between my various chains looking across these experiments, relative to looking at a single trial or a single experiment that only ran one repetition. So really, that's all there is to it. It's really simple and it's a very nice feature. I've used it extensively just manually, but having it now as a feature in LangSmith makes it much easier to run experiments with repetitions and build more confidence in your results. Thanks.
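A minimal sketch of the SDK call described above, assuming a hypothetical dataset named "LCEL-QA" and placeholder target and evaluator functions (the video uses a real RAG chain and an LLM-as-judge grader scored 0 to 10):

from langsmith.evaluation import evaluate

def predict(inputs: dict) -> dict:
    # Stand-in for the RAG chain being graded; assumes a "question" input key.
    return {"answer": f"(placeholder answer for: {inputs['question']})"}

def long_enough(run, example) -> dict:
    # Toy deterministic evaluator, only to keep the sketch self-contained.
    return {"key": "long_enough", "score": int(len((run.outputs or {}).get("answer", "")) > 20)}

evaluate(
    predict,
    data="LCEL-QA",                  # hypothetical 20-question dataset
    evaluators=[long_enough],
    num_repetitions=3,               # run the whole experiment three times (20 x 3 = 60 runs)
    experiment_prefix="gpt-4o-with-vectorstore",
)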
Repetitions | LangSmith Evaluation - Part 23
335
LangChain
20240530
With the rapid pace of AI, developers are often faced with a paradox of choice: how to choose the right prompt, how to trade-off LLM quality vs cost? Evaluations can accelerate development with a structured process for making these decisions. But, we've heard that it is challenging to get started. So, we are launching a series of short videos focused on explaining how to perform evaluations using LangSmith. This video focuses on repetitions, which allows you to run multiple trials of an experiment. This can be very useful to build confidence in a result, especially in cases where the app being evaluated or the evaluator itself has some variability in its behavior. Docs: https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_llm_application#evaluate-on-a-dataset-with-repetitions
2024-06-10T11:36:03.271288
https://www.youtube.com/watch?v=4rupAXVraEA
Hi, Harrison from LangChain here, and today I want to talk about a series of features that we're releasing as part of LangSmith. If you haven't used LangSmith already, it's our LLM systems ops platform for logging, monitoring, debugging, testing, and evaluation of your LLM apps, and we're releasing a series of new features specifically around production monitoring and automations. LangSmith is really handy for getting a sense of what's going on in your application, and that doesn't just mean what's going on during development time or in offline mode; it also means what's going on when your application is live and in production. So we've added a bunch of new features to help you understand what's going on and then take action on it. We're going to have detailed videos on all these features, and then also some more end-to-end use case videos on how to combine and use these features in very real situations. As an overview of what we'll be covering: first, we're going to talk about filtering. As you get more and more logs in your production service, you need to be able to filter into the subsets that you want to explore really easily, and we've added a really good filtering experience. We're then going to talk about some of the monitoring charts that we've added and all the things that you can do with them; this provides a really great way to get an aggregate view of what is going on in your application. After that, we're going to talk about threads. One of the main UXs for LLM applications is chat, and being able to group the different turns of a conversation together into a sensible view, and have a great bird's-eye view of that thread or that conversation, is really important. So we've added a view not at the trace level, but at the thread level. And finally, we'll dive into automations. Automations are basically ways of taking filtered subsets of data automatically and doing things with them. One thing you can do is send them to a dataset. Another thing you can do is send them to an annotation queue, and I'll explain what an annotation queue is. And the third thing you can do, and this is a really big new feature we've added, is online evaluation. With online evaluation, you can define a prompt to run over some subset of traces, leave some feedback, and automatically evaluate the runs that you see coming in from production traffic. So I'm going to cover all these things, and then I'll also cover a few use cases, what you can do with these features, and really the real-world problems that you can solve with this concept of production monitoring and automations. All of these features are part of LangSmith, which is our SaaS platform for doing logging, monitoring, testing, and evaluation. You can use LangSmith whether you are using LangChain or not, so it is completely independent. In order to sign up for an account, you can go to smith.langchain.com and sign up for free. You will probably want to do that before we continue with the rest of this video. Once you've done that, come back and jump into the next guide. Let's get started!
Introduction: Monitoring and Automations Essentials with LangSmith
219
LangChain
20240402
You’ve shipped your AI application to production and users are starting to interact with it, congrats! But as with all software products, the hard work *starts* on launch day. How are people interacting with your chatbot? Is your application performing well or hitting rough patches? Who was affected, and how bad was it? Once you diagnose the problem, how are you going to fix it? In this video series, we’ll show you how to use LangSmith to: 1. Monitor your application with aggregates and drill downs, giving you the trendline and fine-grained filtering to understand your application performance. 2. Set up automations to take action on production data, either for online evaluation, data set construction, or human annotation. 3. Enhance performance of your application by leveraging the production data and feedback collected from monitoring and automations. We’ll run through an example of how you can incorporate this feedback to optimize your application. Documentation: https://docs.smith.langchain.com/monitoring Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:38:55.129916
https://www.youtube.com/watch?v=OXAkjTqLV4c
In this video, I want to talk about monitoring. It's great to look at traces individually, but oftentimes you want to look at aggregate statistics to get a better overall sense of what's happening inside your application, and we've added a monitoring tab to enable exactly this. I'm going to walk through this in LangSmith, but I'm also going to highlight the documentation that we have here, and I'll link to it in the description below. In LangSmith, you can go into a project that you have. I'm going to go into Chat LangChain, and these are the runs that make up the chatbot over our documentation. I'm then going to click on this monitoring tab up here, and I'll see that I get these monitors of various statistics over time. These are statistics that we find to be important when developing an LLM application. Here, this is just a basic count of traces over time and of LLM calls over time. In this case they correlate pretty heavily, but it's possible that these could be less closely correlated. I can also see trace success rates over time; I see that I had a slight dip here, but other than that, it's pretty good. I can also see latency over time, tokens per second, and feedback over time. I track user score, and we also track vagueness, and these are two different feedback measures that we can see over time. We track tokens as well as cost. And we've really emphasized LLM-specific features, the ones mentioned above, as well as things like streaming: how often streaming is used, time to first token, and things like that. So this is great for getting an overview of what's happening in your application. You can also zoom in and out. I can zoom out to a 30-day time period and see this a little more over time, and I can also zoom in more closely, so here's what's happened in the last hour. Another thing that I can do is filter based on metadata, or rather group based on metadata; I can do both. When I click on metadata, if I click LLM, I can basically have these results grouped over time. What's going on under the hood is that in Chat LangChain, we actually rotate between five different LLM providers: Anthropic, Cohere, Fireworks, Google, and OpenAI. And so I can track the various statistics within these groups. This is really helpful for changing parameters of your application and then tracking it and seeing how it performs. Some interesting things that I can see are trace success rates. Here, where we had that dip before, I can actually see that it was Cohere that was erroring a little bit, and it caused some of the errors. I can also track latency over time; I can see right here that OpenAI and Fireworks seem to be the quickest ones to respond. I can also track feedback. Here it's a bit more mixed; we don't get a lot of feedback on Chat LangChain, unfortunately, but you can track feedback over time and basically see how different parameters are performing. So this is really cool for tracking different aspects of your application. Here we show off tracking different LLMs, but it can be other features as well. You can have different prompts that you rotate through, and as long as you attach them as metadata, you can filter and group based on them and get these monitoring charts.
Another thing that you can do is filter into these subsets really easily. Here I can see that I had this dip in success, and it corresponds to an increase in errors. Right, so if I go here, I can see that we have this spike where Cohere had a pretty high error rate. And if I click on this data point, what this is going to do is actually give me all the individual traces that are in this data point on the screen. So here I can see all the ones with failures, and if I click into one and go here, I can see that I got a timeout error. So it's really useful for diving into different subsets of the data and really focusing in on what's going wrong there. That's pretty much it for monitoring. Hopefully this shows you how you can use the monitoring dashboard to track the performance of your application over time, as well as group different runs and track those different subsets over time, which can allow for some really interesting comparisons in an online setting. And I think that's important, because oftentimes you can do a bunch of tests, and we have a great testing framework as well, but that's mostly offline, and things often change when you put them in production: people use your app in ways that you don't expect. So being able to track that performance in an online manner is super important, and these are some of the features that we've added to enable exactly that.
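For the metadata grouping shown here, the key ingredient is that each run carries a metadata tag you can group by. A minimal sketch of how that might look with a LangChain runnable follows; the "llm" key, provider names, and prompt are assumptions, and note that this toy example only sets the tag that monitoring groups on, it does not actually swap the underlying model.

import random
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Assumes LANGCHAIN_TRACING_V2 / LANGCHAIN_API_KEY / LANGCHAIN_PROJECT are set.
chain = ChatPromptTemplate.from_template("Answer briefly: {question}") | ChatOpenAI(model="gpt-3.5-turbo")

provider = random.choice(["anthropic", "cohere", "fireworks", "google", "openai"])
chain.invoke(
    {"question": "What is LCEL?"},
    # This metadata lands on the trace in LangSmith and can then be used for
    # grouping and filtering in the monitoring tab.
    config={"metadata": {"llm": provider}},
)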
Monitoring: Aggregate LLM stats and metadata grouping in LangSmith's interactive dashboard
301
LangChain
20240402
Monitoring allows you to get an aggregate view of what is going on in your application over time. LangSmith provides monitoring charts that allow you to track key metrics — such as success rates, latency, feedback, cost, and qualitative characteristics with our new feature, ✨Online Evaluations✨ LangSmith also allows for tag and metadata grouping, which lets you mark different versions of your application with identifiers and view how they are performing side-by-side. This is helpful for A/B testing changes in prompt, model, or retrieval strategy. Documentation: https://docs.smith.langchain.com/monitoring Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:39:51.385545
https://www.youtube.com/watch?v=fzNSFuqtF_M
In this video, we're going to talk about filtering. As you log more and more data to LangSmith from your application, it becomes really important to be able to filter it and dive into the different subsets of the data that you want to look at. We've built out a lot of functionality around that, and with it you can do some pretty advanced stuff, and that's what we're going to talk about in this video. There's also documentation for this, and I'll link to it in the description below. In order to take advantage of filtering, you can go over to LangSmith and go to a project of your choice; for this one I'm going to go to Chat LangChain. Chat LangChain is an application that we expose that is chat over our documentation for LangChain. You can see up here that I have this filter button, and if I click into it, I can see that there's one filter applied by default, and this is isRoot equals true. What this allows is for me to see all top-level traces. Remember, if I click into a trace, there are actually multiple runs going on inside that trace. So when I set is root equals true, this only shows me the top-level view of that. If I remove this filter, then I start to see all the individual runs show up below. In addition to controlling filters from up here, I can also choose a few filters from the sidebar here. Here we have a bunch of different filter shortcuts for commonly used filters: filters on feedback, feedback source, different types of metadata, the name, the run type, the status, any tags, things like that. So if you want a quick filter, this shortcut area is a good place to go. Inside this filter pane, I'll add back is root equals true to just look at the top-level traces for a little bit. And there are a bunch of things that I can filter on. I can do a full-text search, so I can search for traces that mention memory. When I apply this, I can click in, and somewhere in here is "memory"; here we can see that memory is specified. So I can do a full-text search over all the traces that I get. What else can I add? I can search based on input and output. I can search based on name, and name is really useful when I want to filter sub-runs; I'll do that in a second, but first I want to stay with the is root equals true case. I can filter based on error or not, so let me filter for where errors happened, and I can click in and see what exactly went wrong in the error; here it looks like it was cancelled. So if I have a bunch of errors occurring, I can filter to runs that had errors and then dive into them and see what's going on. This is a good way to monitor for what bad things are happening. Speaking of bad things happening, another thing that I can filter on is feedback. Here in Chat LangChain, we have a thumbs up and thumbs down; that's registered as user score. So I can filter for where user score is zero, and these are all the data points where I got negative feedback for my application. I can click into any one and see data points that an end user rated as bad, and so that's really helpful for diving into places where the feedback was negative. Other things I can do: I can search based on latency, so this is for runs that might be longer than 10 seconds. I can go like that, and I get a list of all the long runs. I can filter based on tag. I can filter based on metadata.
One of the things that we log in Chat LangChain is the LLM that we use, and here I can choose "where", since we attach this as a metadata tag. So I can filter for metadata where the LLM is Fireworks, for example, and all of these runs use Fireworks under the hood. Another thing that I can do here is magically create a filter with AI. I can specify something like "runs longer than 10 seconds", and this will convert it into a filter and add it up above. This looks correct, so I'm going to give it a thumbs up as feedback. So if I don't want to go digging around in here and figure out all the different things I need to choose manually, I can specify it in natural language here and it'll be turned into a filter. If I remove the is root equals true filter, another neat thing that I can do is search based on name. If we remember from the top-level trace, there are a bunch of sub-runs, and maybe I want to look at specific sub-runs. So let's look at the specific sub-runs for retrieve documents. Now I have all the retrieve documents steps, and if I click into here, there are actually some nested things here, right? This shows up as part of a larger trace. But basically, I can do things like filter on sub-components of a chain, and then I can apply the same exact filters as before. So if I want to look up where retrieve documents has a latency of more than one second, I can click into this here, and we can see that everything within here is longer than one second; this is just that specific subset. There's also a concept of advanced filters. Let's keep these filters and let me click here, and three things pop up: trace filters, tree filters, and a raw query. Trace filters apply to the root run of the trace. Remember, I removed the is root equals true filter, so I have all the sub-runs here, but one really common thing to do is look for sub-runs where the parent run has some attribute. Specifically, in this case, what I want to do is look for sub-runs of retrieve documents that take longer than one second where the parent had negative feedback, let's say. So I can click here into trace filters, and I have all the same options as before, so I can go feedback, user score is zero. And now if I look at a run, this is the retrieve documents run where it took longer than a second, because I still have that filter applied, and if I go up to the top-level run and look at feedback, it had a user score of zero. So basically what I can do is filter the sub-runs based on the trace attributes. Again, this is really important when you're collecting things at the top level. This could be feedback from the user; this could also be other metadata that you assign automatically at the top level, but you want to filter for sub-runs. I can also do the opposite. So I can go back here and say is root equals true. And if you notice, I can't add trace filters, because when is root equals true, any filter here is the same as a trace filter. But one thing I can do is a tree filter, and this will basically filter these runs based on whether any run that happens below them matches some criteria. So I can add something like this: let's do retrieve docs, and let's add another filter for latency, one second.
And so now, if I look at this, for example, I can see that this filters for where retrieve docs is greater than one second, but it gives me all of the top-level traces. I can also add something like feedback on the top level, user score is one, and this is very similar to the query that I did before, but in the reverse order. So I'm filtering for top-level traces where the user score is one and where there's some sub-run where retrieve docs has more than one second of latency. And if I click into here, I can see that I have user score equals one and retrieve docs greater than one second. The last thing I want to show is copying and pasting these filters. Here, if I copy this filter, I get a string representation of the exact filter that I have here. What I can then do is paste this either into this raw query bar right here or into the SDK that we have, and this will apply the filter. You can see that it's a query language: and(eq(is_root, true)). I can also modify this here if I want, and then when I click it, it applies it. Note that it adds it as additional filters, so if you have previous filters, you'll need to remove them if you want the filter that you paste to be the only filter present. That's pretty much it for filtering. Filtering is really, really powerful for letting you dive into subsets of the data based on attributes of the trace, attributes of the parent run, and attributes of child runs, and we've added a lot of functionality around that. So I hope you enjoy this feature and that it lets you explore your data more handily.
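A small sketch of what using a filter from the SDK might look like, mirroring the sub-run example above. The project name, run name, and the exact filter grammar shown here are assumptions based on what appears in the UI, so treat them as illustrative rather than exact.

from langsmith import Client

client = Client()

# Sub-runs named "retrieve_documents" slower than 1s whose parent trace got a
# thumbs-down (user_score = 0), i.e. the trace-filter example from the video.
runs = client.list_runs(
    project_name="chat-langchain",  # hypothetical project name
    filter='and(eq(name, "retrieve_documents"), gt(latency, "1s"))',
    trace_filter='and(eq(feedback_key, "user_score"), eq(feedback_score, 0))',
)
for run in runs:
    print(run.id, run.name, run.start_time)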
Filtering: Advanced run analysis with new filters and AI Query in LangSmith
591
LangChain
20240402
Whether you’re trying to understand the experience of a specific customer segment or find traces associated with poor user feedback scores, filtering allows you to focus on the subset of traces that matter to you and to drill down into the details you need. LangSmith lets you filter based on a variety of trace attributes, and also offers full text search on inputs and outputs. We’ve added new functionality for allowing for filtering of runs based on attributes of either their *parent* or *child* runs. This is particularly useful for filtering runs that are part of an agent (or any multi-step) workflow. Documentation: https://docs.smith.langchain.com/monitoring/faq/filter Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:42:06.499848
https://www.youtube.com/watch?v=n8WHuupE_i0
In this video, I want to talk about threads and tracking threads in LangSmith. LangSmith captures traces, and each trace can have a set of runs inside it, but these traces are oftentimes separate calls and invocations. One way to think about this is that when you're having a conversation with a chatbot, each time the chatbot responds, that's one trace. There might be many different things happening inside it, like doing a retrieval step and then responding, but that's one trace. When you have a conversation, though, you send a message, the chatbot sends a message, and each of those messages is a separate trace. When debugging and trying to understand what's going on, it can be useful to tie those traces together so you have a full view of the conversation that's happening, and this is where threads come in. Basically, what you can do is attach a thread ID to traces, and then you can view and group traces by those thread IDs. So I'm going to walk through a quick example of doing exactly that. We have an example here where we're not using LangChain at all; this is just raw OpenAI, and we're tracking this thread ID. There are three different ways that you can specify a thread ID: it can be called session ID, thread ID, or conversation ID. Here you'll see that we're putting session ID as an extra metadata argument. So if I copy this code snippet, I'm going to bring it over to a Jupyter notebook I have, run it, and take a closer look at what exactly is going on. I'm using OpenAI, and I'm importing this traceable decorator; this has to do with tracing and is a really easy way to trace your runs. Then I'm defining the session ID, and this is a unique identifier that I'm going to be passing through in the metadata. I have this function here, which is a nice little thing that basically just calls OpenAI. And then I have my conversation that I'm simulating. So I have the messages that I send in first, "Hi, I'm Bob." I send this into the assistant, and I pass in this langsmith_extra, this metadata key. And this is important because this metadata value isn't known ahead of time. If it were known ahead of time, we could have specified it as part of the traceable decorator, but because it's not, because it's this unique session ID and it's going to be different for each session, I'm going to pass it in with this langsmith_extra. Then I get back the response, I add it into my list of messages, and then I ask another question, "What's my name?", and I call the assistant again. If we go to LangSmith, these are getting logged to my default project. If I go to threads, I can see that I have a new thread here. I can click into it, and here I have this very chatbot-conversation-like view where I can see the conversation as it unfolds. From here, I can also open the individual traces. So if I want to see the full trace, I can open it up, and it's not that interesting because I just have a single call here. But the point is that you can view the conversations together, and then you can easily drill into individual traces if you want to, and so it provides the best of both worlds. I'm going to show this again, with LangChain this time.
So here, let me copy this code snippet. I'm going to go into the notebook, and we can see that I'm basically creating a really simple chain that's just a prompt plus a model. I'm creating my list of messages, and then I'm creating this run config, which is just a dictionary with this metadata key. I'm specifying conversation ID this time, so you can see how I'm alternating the different keys that I could specify. And then when I call the chain, when I use .invoke, I'm passing in my input, but I'm also passing in config equals config. And remember, I defined my config up here, and it's just this metadata with the conversation ID. I then add to my messages, and then I call it again. So let me run this. Now I can go back here, and I can see that I have a new conversation thread, and when I click on it, this is what happened when I called it with LangChain. So hopefully this is a good example of how you can easily attach this thread ID to any traces that you have, and then you can view them in a very user-friendly way, again, whether you're using LangChain or not. That's it for this video.
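The two patterns walked through above look roughly like this. This is a sketch rather than the exact notebook from the video; the model, prompt, and message handling are simplified, and conversational memory is intentionally omitted since the point is only the thread metadata key.

import uuid
from langsmith import traceable
from openai import OpenAI

oai = OpenAI()
session_id = str(uuid.uuid4())  # unique per conversation

@traceable(name="assistant")
def assistant(messages: list) -> str:
    resp = oai.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

messages = [{"role": "user", "content": "Hi, I'm Bob"}]
# session_id isn't known when the decorator runs, so it's passed per call.
reply = assistant(messages, langsmith_extra={"metadata": {"session_id": session_id}})
messages += [{"role": "assistant", "content": reply},
             {"role": "user", "content": "What's my name?"}]
assistant(messages, langsmith_extra={"metadata": {"session_id": session_id}})

# The same grouping with a LangChain runnable, using conversation_id instead.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = ChatPromptTemplate.from_template("{question}") | ChatOpenAI(model="gpt-3.5-turbo")
config = {"metadata": {"conversation_id": str(uuid.uuid4())}}
chain.invoke({"question": "Hi, I'm Bob"}, config=config)
chain.invoke({"question": "What's my name?"}, config=config)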
Threads: Unified chat views for conversation debugging
281
LangChain
20240402
In chat applications, there is a back and forth between human messages and AI responses. Each message turn is a trace, but you may want to see the whole thread history in one place if you’re trying to understand or debug a conversation. With Threads, we've now introduced a way to view the full back-and-forth of a conversation in a single view. Documentation: https://docs.smith.langchain.com/monitoring/faq/threads Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:43:01.640398
https://www.youtube.com/watch?v=ak2AIiX0P_A
In this video, I'm going to talk about automations. So far we've covered a bunch of functionality that allows you to dive in manually, pinpoint data points, view data points, and debug things. All of that is great; looking at your data is so important, and you absolutely should do that. But oftentimes you may want to set up some automations to do things automatically over the vast number of runs that you're getting in production, and that's exactly what we've built. This is going to rely pretty heavily on filtering, so if you haven't watched the filtering section, I would highly recommend watching that first. Basically, the way that automations work, and here I'm going to switch over to LangSmith, is that you go into a project, you apply some filter, and then you set up a rule to run over a random subset of those data points. So let's break that down. Here I'm in the Chat LangChain project; this holds all the runs in our Chat LangChain documentation bot. I can create a filter, and again, this is where the filtering section comes in handy, but let me create a really simple one, something like all top-level runs where the user score feedback is positive. These are all runs that got a thumbs up from users. I can look at these manually, I can inspect them, and that's great, but I can also add a rule. So I'm going to click on this, and it's going to pop up a little sidebar. I can give a name to this rule, so let me use thumbs up as the name. I can also adjust the sampling rate. By default it's set to 1, but I can drag it to be anything between 0 and 1; this represents the percentage of the runs that meet this filter that I'm going to apply the action to. So let's talk about actions. There are three actions: send to annotation queue, send to dataset, and online evaluation. I'm going to deep dive into all three of these later on, but at a high level, sending to an annotation queue allows you to send data points to an annotation queue where humans can go in and review them in a really easy way. Sending to a dataset allows you to send runs to a dataset; you can use this dataset for testing and evaluation, you can use it for few-shot examples in production, and you can use it to fine-tune models. It's basically creating a collection of data points. And then online evaluation is going to be a deep-dive section later on; what it does is run an LLM over this subset of runs and then automatically apply some feedback to them based on the LLM's scores. So this is how you can set it up from the project. You can also view and edit all these rules on your settings page. If you go to settings and then to rules, I can see that I've created a bunch of these rules already. I can edit them and delete them from here as well: I can go in and edit, I can see what we've created, I can change it if I want to, or I can just delete the rule. So you can manage all these automations from this page. That's an overview of the automations feature. The real power comes in with what exactly you're doing with the data points.
And so I'm going to talk about the annotation queue and online evaluation in deep dives in the next two videos, because I think those are really important to understand. And then I'm going to talk about a few use cases: end-to-end framings of problems that you can solve with these automations. Let's go to the next one.
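The UI rule handles the routing automatically, but as a rough sketch of what the "send to dataset" action amounts to, here is a manual SDK equivalent. The project name, dataset name, and filter string are assumptions, and this is not how the feature is implemented internally.

from langsmith import Client

client = Client()
dataset = client.create_dataset(dataset_name="thumbs-up-chat-langchain")  # hypothetical name

# Top-level runs whose user_score feedback is 1 (a thumbs up).
good_runs = client.list_runs(
    project_name="chat-langchain",
    filter='and(eq(is_root, true), eq(feedback_key, "user_score"), eq(feedback_score, 1))',
)
for run in good_runs:
    client.create_example(inputs=run.inputs, outputs=run.outputs, dataset_id=dataset.id)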
Automations: Streamlined data workflow for Datasets, Annotations, and Online Evaluations
228
LangChain
20240402
Production AI applications generate a large volume of data, and in order to extract insights and improve your app, you’ll need a way to sift through information quickly. While it’s useful to look at data by hand, creating automations to route your data can save a considerable amount of time. Documentation: https://docs.smith.langchain.com/monitoring/faq/automations Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:43:42.875204
https://www.youtube.com/watch?v=3Ws5wOS9eko
One of the actions that we saw we could set up automations to take is to send runs to an annotation queue. But what exactly is an annotation queue? That's what we're going to cover in this video. Annotation queues are a user-friendly way to quickly cycle through and annotate data, so let's see exactly what that means. If I go to LangSmith and look at this little sidebar on the left, I can see that the second item is this annotation queues tab. I can click on that and see a list of all the annotation queues that I've created. If I click into one, I'm presented with a user-friendly way to annotate these runs. So let's break down what I have going on here. Here I have the inputs and the outputs of the run, and this right here is actually the annotation queue we have set up for the magic filter thing. If you remember, going all the way back to the filtering video, we have a way to automatically create filters from natural language, and that's what's going on here. So here I can see that I have inputs, I have outputs, and then there's a bunch of stuff I can do; let me break that down. The first thing I can do is view the run. If I click view run, I open up a sidebar and get this very familiar-looking debugger tracer mode where I can see everything that's going on. I can see what project it came from, and if I want to, I can navigate back to that project. Up at the top, I have a sense of how many runs I have left to annotate, and as we walk through the annotation queue, this provides an overview of how many runs are actually present. Over here is where I can leave feedback. I can leave feedback tags by clicking on a tag here; I can add a new one or choose from any of the ones that I've set up. I can also add a note. Here it looks like the user input was actually already a query, so maybe I'll say something like "already a query" and leave that note just for anyone in the future to come and look at. All of these are saved with this run, so if I query this run programmatically or anything like that, they're all attached to it. So that's leaving feedback. Now let's talk about cycling through these data points. There are a few actions that I can take for this data point. First up, I can add it to a dataset. If I click add to dataset, I can then choose the dataset that I want to add it to. I can also edit it before I do that; these blocks here are editable. So if I change it to something like "runs with name text to playbook" or something like that, I can then click Add to Dataset, and it will add the edited version. This is really useful because, and I'll talk about automations and some of the automations you can do, you can send runs with negative feedback here, correct the output to what it should be, and then send that corrected output to a dataset. Other things that I can do: I can move this data point to the end, and that just moves it to the end of the queue, so it will move it back here. If I click this, you'll notice that the number of things in the queue doesn't change, but I go to the next data point. Alternatively, I could click done, and this will mark this run as done, and you'll see that this number will actually go down.
I can also cycle through the runs here. Here it looks like basically the same thing was sent a bunch of times; this was probably because we were testing something out. But here I can get to some real runs, and I can see that I can navigate with these buttons forward and backward. So if I want to just jump through and see what's going on, this is a user-friendly way to do that. That's an overview of annotation queues. And remember, these can be used in automations. If I go back to my project and set up an automation, I can click here on the action, send to annotation queue, and I can select an annotation queue to send this subset of runs to, or I can create a new queue. That's it for annotation queues. Thanks for watching.
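On the point that feedback tags and notes left in the queue are saved with the run: a small sketch of reading them back with the SDK, where the run ID is just a placeholder.

from langsmith import Client

client = Client()
run_id = "00000000-0000-0000-0000-000000000000"  # placeholder; use a real run ID

# Feedback keys, scores, and comments attached to the run, e.g. from the annotation queue.
for fb in client.list_feedback(run_ids=[run_id]):
    print(fb.key, fb.score, fb.comment)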
Annotation Queues: Efficiently manage data review with feedback tools
260
LangChain
20240402
If you’re trying to improve user experience, you may want to drill into traces with negative feedback or gather traces with positive feedback to use as few shot examples. With LangSmith, you can define filtering parameters and automatically send traces to an Annotation Queue. Annotation Queues allow you to closely inspect traces and tag them with different criteria, helping you categorize and layer on human feedback. Documentation: https://docs.smith.langchain.com/monitoring/faq/annotation_queue Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:44:35.748935
https://www.youtube.com/watch?v=4NbV44E-hCU
The last feature that I want to cover as part of this production monitoring and automation series is online evaluation. This is a really cool and much-requested feature, so I'm really excited to dive into it. The basic idea of online evaluation is applying a prompt plus an LLM to assign feedback automatically to data points in production, and I'm going to show how to do that. This is the documentation here, and I'll have a link for it in the description below. To start, I'm going to jump back over to this familiar project, which is Chat LangChain. These are all the runs we have coming in. One of the automations that we've set up is that we want to tag all runs that have negative feedback with a vagueness tag. The reason for that is we want to look at all runs with negative feedback and determine whether they got negative feedback because the question was vague or because the response was wrong. We could look at all the data points by hand, but instead we're going to have an LLM do that and assign this tag, and that's going to give us a good indication. The first part of that is setting up a filter, and this is covered in the filtering video as well as the automations bit, but basically I'm going to set up a filter for where the feedback user score is zero. So I set up this filter, and now I'm going to add this automation. I'm going to add something like vagueness as the name. The sampling rate I'm going to set to one, so I run this over all data points, and that's because Chat LangChain has a manageable number of data points with negative feedback. And then I'm going to select this online evaluation component. When I select this, I get this little button here called Create Evaluator. I then open up this tab, and I can see a few things. First, I can see the secrets and API keys associated with this online evaluator. Remember, this is using a language model, so we need to specify API keys for that language model. If I click in here, I can see that I can specify my OpenAI key right here. I can then choose the model; here it's using GPT-3.5 Turbo, and I can change it to any of the provided ones if I want. I can also change the temperature, should I choose. The main interesting part comes in when you specify the prompt. When I go here, I can click set inline prompt, and I get this template that pops up. There are a few things to note here. First is the template. The template has two input variables, and it doesn't need to use them, but it should use only these two, because basically what's happening is we're going to fetch the data from the run, pass it into this prompt template, format that into a message, and then pass that to the language model. So here I can see that input and output are the two prompt variables, and those are exactly what I should be using; these represent the inputs and outputs, respectively, of the run. And if the inputs and outputs are nested, if they're a dictionary with multiple keys or something, they'll be rendered as a dictionary as well, so keep that in mind when you're designing this prompt. So here, if I want, I can go in and change the prompt. The other really important thing that I can do is attach a schema to this, and this is important to understand. The schema has a series of arguments.
Each argument will end up being a key that we attach to the run when it's finished. Here I want to specify vagueness, so I'm going to change the name of this from correctness to vagueness, and this name, the name of this key, is what will show up on my run as feedback. Then I'm also going to change the description to something like: is the user input vague or not? I can mark this as required or not; if I want to let the LLM leave this feedback only optionally, I can unclick this. And then I can choose the type: boolean, string, number, null, integer, object, array. I can choose any of those, and this will be the type of the value that's left as feedback. I can also add other arguments, so if I want to do vagueness and correctness in the same evaluator, I definitely could, and these can be different types as well. Then, when that's all finished, I can hit save, and it will save this online evaluator as a rule that gets run over these sampled data points. I can see that I've set this up here, I've already done this, and I can see that I get the vagueness tags coming in. So if I filter into a particular subset of runs, here I can see a run, and if I look at feedback, I can see that it had a user score of zero, and I can see that it also had this vagueness score of one, and this was actually left by the online evaluator. I've also set up another evaluator that randomly samples data points and tags them according to different categories; this one was labeled as conceptual. So hopefully this shows a few interesting ways that you can use these online evaluators to automatically tag, classify, and provide more insight into the various inputs and outputs that I have coming into my system.
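To make the mechanics concrete: each schema argument becomes a feedback key (like "vagueness") attached to the run. Conceptually this is the same thing you could do by hand with the SDK, which the online evaluator simply automates; the sketch below uses a placeholder run ID and an invented comment.

from langsmith import Client

client = Client()
run_id = "00000000-0000-0000-0000-000000000000"  # placeholder; use a real run ID

# Attach a "vagueness" feedback score to the run, just as the online evaluator would.
client.create_feedback(
    run_id,
    key="vagueness",
    score=1,
    comment="The user input was too vague to answer precisely.",
)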
Online Evaluation: Simplifying assessment of LLM responses
348
LangChain
20240402
While it may be tough for a human to look at a large amount of data — it's quite easy for a language model! We’re excited to introduce Online Evaluation, which allows you to set a LLM and define a prompt to run over production traces. Online Evaluation enables you to more easily classify and spot qualitative trends or drift in your application performance. You can evaluate traces based on criteria such as “insensitivity” if the LLM responded in an insensitive manner or give it a “topic” category to summarize the content of the response — the criteria is completely customizable. Documentation: https://docs.smith.langchain.com/monitoring/faq/online_evaluation Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:45:23.896982
https://www.youtube.com/watch?v=WODgxh_wGTY
In this video, I want to walk through a few of the common workflows that we see around for how to use automations best. And so they're on this use cases page in the documentation here. And these are quick, high level overviews of those workflows. So let's jump into the first one, which is basically just sending bad data points into an annotation queue. And so the idea here is that you generally want to do this so that you as a human can look at things by eye. So the way that I would do that is I'd go into my project. I'll use the chat link chain project here. And I'd set up a filter to capture all runs that have bad feedback. And so I'd add a filter based on feedback. Here I know that user score is the one that I'm collecting from the user, and I'll filter to ones where it's zero. Once I've set up that filter, I'll then go into the rule. I'll say something like bad feedback. Sampling rate is generally good to have at B1, and this is because I want to look at all data points although if I don't want to look at all data points I could set it to 0.5 or something like that. I'll then click on annotation queue, maybe I'll create a new queue that's like bad chat lane chain runs and then I'll send things there. Then when things go into that annotation queue, I can look at them. Honestly, the main value to start is just like seeing what types of data points we're getting wrong. And then from there, I can leave more precise feedback. I can correct them and add them to a data set or something like that. The next use case I want to talk about is sending data points with positive feedback to a data set. So this is useful because we can assume that these data points are good, they're good representative examples, and that's great. We can use that as an example to test against to make sure that when we make prompt changes in the future we still do well. We can use that as an example to use in a few shot examples or fine tuning to improve the application. So here, I'll go in, I'll change, it's basically the same as before, but I'm just flipping the user score to one. And so now this is positive. I'll add a rule and this is like positive chat LC. Generally we'll take all of them. Again, you can change it if you want and then I'll send it to a data set. And so maybe I'll create a new data set and I'll have it be something like good chat LC examples. And then I'll select key value because I know that the inputs and outputs of these are usually dictionaries. And I will create it and save it. And boom, now I'm going to start building up this data set over time of examples with positive feedback that I can use in a variety of ways. The next one is sending child's runs whose trace got positive feedback to a dataset. Okay, so what does this mean? So this means if I go in here, let me remove this filter. If I go in here, let me click on one, I can see that there's a lot of runs here going on under the hood. And so what oftentimes happens is that I want to like this, this high level thing, it's great to have examples of that. But when it comes time to optimize what I really need, and by optimizing me, like use few shot examples or update the prompts, I really need few shot examples out at one of these levels. Cause like this high level input and output, it doesn't help me at this step where I'm kind of like generating an input and output. And so here we can see the condensed question. We can see that we get as input questions and then this chat history stuff, and then we get out this response. 
And so this is the inputs and outputs that I need in order to optimize this kind of node of my chain or of my graph. And so this is really what I want to be building up examples for if my chain is more than one step and if I want to be optimizing the individual nodes. Again, there's value in getting the end-to-end data set, but there's also a lot of value in getting data sets for the specific nodes. So let's see how to do that. So here I'm going to remove this, because I no longer want the top-level runs; I want the child runs. And so I'm going to filter by name. I find that usually the easiest way to do it. And I can see that it's condensed question. Filter there, and I get all the condensed question runs. And now what I want is only the runs where the parent trace got positive feedback. And I'm basically assuming that if it got positive feedback at the end-to-end level, then the step here is correct. And, you know, that's not 100% a good assumption to make, but I think it's a pretty solid assumption. Go to advanced filters. We can add a trace filter. And then we add the feedback: user score is 1 here. And so now I have this filter set up where I'm looking at condensed question runs where the end-to-end thing got positive feedback. And just to make that even more clear, let me add... actually, this automatically applies to the top-level trace, so that's fine. Now from here, I can do the same thing as before. I can create something like a condensed question data set: I can send this to a data set and create a new data set. And boom! There we go. So now I'm going to start building up a data set of examples for this particular node in the graph. And so that's really powerful because, again, those are what I need to optimize that call to a language model. It doesn't do me a ton of good to have a call end-to-end. That call at the node is what I want to optimize. The final one is just sending random data points to an annotation queue. And so this is good for just making sure that you're seeing a variety of data points. So, you know, it's great to look at bad data points, but you want to keep looking at a broader sample, and you could just not be getting user feedback at all. That's very common. We don't get a lot of user feedback on ChatLangChain, and so being able to just send random data points and then look at those random data points is pretty valuable. And so what I can do is I'll just go back in, I'll add some rule, random data points. I'm gonna set a lower sampling rate here because, you know, I don't want to look at all data points. I want to look at a random subset. I'll say like, yeah, sure, why not? Why not 5%? Send it to an annotation queue. I'll create a new queue for this. I'll call it something like random questions from ChatLangChain, and then I'll create that. And again, the idea here is I'm just looking at a random subset. It's good for me to get an idea of what types of questions people are asking and stuff like that. So those are some of the high-level common flows that we see people doing. I'm gonna dive into a few specific ones in the next videos.
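For readers who prefer code over clicking through the UI, here is a rough programmatic sketch of the same two filters using the LangSmith Python client. The project name, run name, filter strings, and the trace_filter argument are assumptions based on the filter query language; verify them against the filtering docs for your client version.

```python
# Rough programmatic equivalent of the UI filters described above (a sketch,
# not the exact setup from the video).
from langsmith import Client

client = Client()  # assumes LANGCHAIN_API_KEY is set in the environment

# 1. Runs with negative user feedback (candidates for an annotation queue).
bad_runs = client.list_runs(
    project_name="chat-langchain",
    filter='and(eq(feedback_key, "user_score"), eq(feedback_score, 0))',
)

# 2. Child "CondenseQuestion" runs whose parent trace got positive feedback
#    (candidates for a per-node few-shot dataset).
condense_runs = client.list_runs(
    project_name="chat-langchain",
    filter='eq(name, "CondenseQuestion")',
    trace_filter='and(eq(feedback_key, "user_score"), eq(feedback_score, 1))',
)

for run in condense_runs:
    print(run.id, list(run.inputs.keys()))
```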
Common Use Cases: Practical applications of LangSmith automation features
465
LangChain
20240402
This video explores common use cases of the workflows covered in previous videos in the series, focusing on optimizing data quality through automated rules and human oversight. Documentation: https://docs.smith.langchain.com/monitoring/use_cases Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:46:35.234726
https://www.youtube.com/watch?v=827QeizQbgU
So in this video, I'm going to walk through a more end-to-end use case of using LangSmith automations to optimize your application over time by constructing a few-shot example data set. And so the example application that I'm going to build is an application that writes tweets for me. And so, you know, I like my tweets written in a particular style, and so I'm gonna build an application that can do that. And I'm going to show how to optimize that via a combination of user feedback and a human annotation queue over time to build an application that writes more in the style that I like. So let's get set up. We can import... we'll use OpenAI for this. We can import LangSmith. And I'll create this really simple function. It's called tweeter. It takes in a topic, and it basically just calls OpenAI with this really standard prompt: write a tweet about topic, and then returns it. What I'm also going to do here is pass in the run ID, and we'll see why I'm doing this. But basically, this is so that I can specify the run ID ahead of time, which then lets me know how to add feedback for a run. So a big part of this... I'm not going to actually create a front end for this, I'm just going to mock it out. But the big part of this is collecting feedback from end users in some different environment, passing that into LangSmith, and then using that to optimize your application over time. So here I pass in the run ID, I pass in NBA, and I get back this tweet: just watched an insane NBA game, those players are on another level, hashtag NBA, hashtag basketball is life, and then some emojis. I kind of like this one. Let's pretend, for simplicity, that I'm going to like tweets where it ends with an emoji. And that's a latent thing that's impacting whether I like the tweet or not. So let me leave positive feedback on that. Let me try again. Let's do soccer. OK, so this doesn't end with an emoji. So even if this is a good tweet, I'm actually going to leave negative feedback on this. Okay, so that's the basic idea of what's going on. Now let's set up some rules in LangSmith that'll help optimize this over time. So what I'm going to do is go into my project. I created a special project: if you notice here, I set the LangChain project to be optimization, and so I'll go into optimization. And I'm going to set up a rule that takes everything with positive feedback and adds it to a data set. So let me go into this filter. I'm gonna add this filter: feedback user score is one. I'm gonna add a rule, tweeting optimization, sample rate of one. I'm gonna send this to a data set, data set name tweeting... let me create a new data set, tweeting optimization, and let's create that and save it. Now let's do one about the NFL. And it doesn't end in an emoji, so I'm going to leave negative feedback, or no feedback. At this point... I'll show how to incorporate negative feedback later on, but for now, let's not leave any feedback. Let's do NBA again. Still no emoji. Let's keep on iterating until I get one that ends in an emoji. All right, so this is proving a little bit more difficult than I expected. So this one ends with some emojis, so I'm going to leave positive feedback on this. All right, this one ends with an emoji too. Let's leave positive feedback on this as well.
So now, what's going to happen is these things that I left positive feedback for, they'll start to get added to a dataset over time. And so these automations run every minute, and so I'm going to need to give it a little bit, but I should be able to go to a dataset and start to see these things pop up as examples in the dataset. So here we are in datasets and testing. I can search for tweet optimization and I can see that I have my two examples here. So if I click in I can see the input and the output and I can do the same for over here and I can see that they're the ones that end in emoji. So now what I want to do is I want to make my prompt a few shot example prompt and I want to start pulling these examples in and using them as examples for my application. Okay, so back in this notebook I am going to pull down examples from this dataset. So I'm going to use the Langsmith client, I'm going to list examples, set the dataset name equals to tweeting optimization, and again this is what I named the dataset. And I can run this and I can get my examples, which right now are two. And I can see that I have a bunch of information about this example. And the most important part is the inputs and the outputs. So I have here the inputs, topic soccer, and outputs, and then I have an output key and it's this tweet. So what I want to do is I want to use these as few-shot examples in a prompt. And so as part of that, this is what it could look like. So let's say, you know, we could take these examples and put them into some string like this. And so we'll have kind of like this input output pairing and then let's recreate our tweeting optimizer and here inside we'll actually pull down the examples so we'll refresh this each time this is maybe a little bit overkill because this will be another network call so you could do this outside but for this example I'm going to do it inside this function I'm going to create this string, I'm going to do it inside this function, I'm going to create this string, and I'm going to edit my prompt. So it's still going to say write a tweet about topic, but then I'm adding these new things. Here are some examples of how to do this well. And then I pass in this example string up here. And so hopefully, what we'll start to see as we give a few shot examples, it starts to implicitly kind of like learn what kinds of tweets I like and don't like. And so if we run it here and ask it to write a tweet about the Oscars, okay, awesome. So it added some emojis to the end. And so I can give positive feedback on that, and it starts to pick that up. One thing that I want to show now is how to do this same thing, but start incorporating kind of like negative feedback. And so there's actually a few like interesting ways that you could do this. You could create the same automation and basically send all rows with negative feedback to another data set and then include those in the prompt and be like, these are examples of tweets that the user did not like. So that's one thing you could do. But for a little bit more of variety, I'm going to actually send negative tweets to an annotation queue and then manually kind of like edit this. And so this shows kind of like the human in the loop component. So maybe, so let's see. 
So here, let me run this a few times until I get a tweet that I don't actually like let me change the topic to something like AI okay so here it doesn't end in an emoji so great I'm gonna leave negative feedback on that then what I'm gonna do is I'm going to go into my tweeting optimization project. I'm going to set up a filter for where my feedback user score is zero. And then I'm going to add a rule. Negative feedback. And I'm going to send this to an annotation queue I'm going to create a new queue which is tweeting optimization let me create that let me save that and awesome okay so now when I go to my annotation queue I should start to see these runs with negative feedback show up. So here we are in the annotation queues and I can click into the Tweeting Optimization annotation queue I just created. And here is the negative run that I got. And actually there's four of them because I gave some downloads before and those showed up. So here what I'm going to do is I'm going to edit them and then add them to a data set. So what I can do is I can just like edit this, delete that, now ends in emoji. Awesome. Now I can add it to a data set. Perfect. Done with that one. Go on to the next one. I'm going one gonna edit this one correct this to what I want it to be add to a data set let's add it to there we go keep on going all right this one doesn't have any so I'm going to remove uh the hash actually you know what I'm just going to skip this one. So I'm just going to be done with this without even adding it to a dataset. And here, boom, add this to a dataset. We're done. We're all caught up. Awesome. So now if I go back to the notebook, I can start to pull down the examples again. And I can see now I have a lot more examples. And if I run this updated tweeter anymore, let's choose a new topic like humans, it ends in emoji. If I go back to Langsmith I can go to my project, I can go to my optimization project, I can click into here, and yeah, this is the run that I had. So this is how we built an application that sets up links with automations that takes user feedback, takes ones with good user feedback, automatically adds it to a dataset. Takes one with bad user feedback, automatically adds it to an annotation queue. A human can then go in and correct it and it starts to build up this data set and this data set I'm then plugging back into the application and it's using that in future iterations to start tweeting in in this case tweeting but in your case it could be whatever application you're building it's optimizing that performance and making it better over time and so this is one use case that we're really, really excited about, and we built a lot of the functionality specifically to enable this.
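To tie the pieces of this walkthrough together, here is a condensed sketch of the notebook workflow: pre-generating a run ID, pulling few-shot examples from the dataset that the automations populate, and leaving feedback by run ID. The dataset name, example keys, prompt wording, and the langsmith_extra run-ID mechanism are approximations, not the exact notebook code.

```python
# Condensed sketch of the tweet-optimization loop described above (assumptions
# noted inline; not the author's exact code).
import uuid
from langsmith import Client, traceable
from openai import OpenAI

ls_client = Client()
oai = OpenAI()

def _few_shot_block(dataset_name: str = "tweeting-optimization") -> str:
    # Pull the examples that the automations have been adding to the dataset.
    examples = ls_client.list_examples(dataset_name=dataset_name)
    # Assumes input key "topic" and output key "output"; adjust to your schema.
    return "\n\n".join(
        f"Topic: {e.inputs['topic']}\nTweet: {e.outputs['output']}" for e in examples
    )

@traceable(name="tweeter")
def tweeter(topic: str) -> str:
    prompt = (
        f"Write a tweet about {topic}.\n\n"
        f"Here are some examples of how to do this well:\n\n{_few_shot_block()}"
    )
    resp = oai.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    run_id = uuid.uuid4()  # pre-generate so we know where to attach feedback
    tweet = tweeter(
        "nba", langsmith_extra={"run_id": run_id, "project_name": "optimization"}
    )
    print(tweet)
    # Mocked end-user feedback: 1 if we liked it (e.g. it ends in an emoji), else 0.
    ls_client.create_feedback(run_id, key="user_score", score=1)
```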
Optimization Use Case: Build a style-adaptive app with LangSmith automations
693
LangChain
20240402
All feedback is helpful, and when your users provide positive or negative feedback, you should leverage that information to improve future app interactions. Now with LangSmith, you can use automations to create valuable datasets from user feedback and integrate those data points as few-shot examples to improve your prompting strategy in your application. This use case walkthrough illustrates developing an application that learns to tweet in a style aligned with user preferences. Documentation: https://docs.smith.langchain.com/monitoring/use_cases/optimization Blog post: https://blog.langchain.dev/langsmith-production-logging-automations/
2024-06-10T11:48:04.640416
https://www.youtube.com/watch?v=VWdRQL0CsAk
Welcome to the Enterprise Model Management course. Before I jump into the outline of the course or any introductions, I want to motivate this course and tell you a little bit about why we created it. And I want to do that with an example. So this is a real pull request from a project that I was involved with in open source. This is from the Kubeflow project and our colleague opens a pull request that changes parameters for a model. So in this case he changes the size of the hidden layer, the learning rate strategy, so on and so forth. The person reviewing the pull request says, okay, that's great. The code looks good. But do these new parameters affect the model's performance? And the original author says, yes, I think it does. I think it does improve the performance. But this is just guessing you know we're having a conversation in a pull request and nobody really knows we're just kind of going on faith that that yeah this change affects the model in a positive way and We don't want to do this. We don't want to guess. When you're writing code, you would never write code like this. If you want to change some code, you want to know whether the code is passing tests, whether the code works or not. This has been a big problem in machine learning for a really long time. For the longest time, I actually created lots of different automations that would bring in tests for machine learning into the PR. I kind of created all these tools and all this stuff because this kind of stuff drove me crazy. And thankfully, now we have great tools, and especially Weights and Biases has created lots of different automations and something around something called the model registry, which we'll get into. That makes it to where you don't have to guess and you have a way to have a single source of truth for your models and you have visibility into the model changes so you don't have to review models like this and you don't have to guess and so I'm really excited about this course because we're going to show you tools that will save you a lot of time and that will prevent you from having to do a lot of guesswork. The next thing I want to do is go over the learning objectives for this course. So first, we're going to talk about what enterprise model management is and what are some of the important concepts of model management. Next thing we're going to do is talk about the central feature that enables model management, which is the weights and biases model registry. Next, we're going to talk about my favorite part, which is how to automate your workflows with the model registry with features like webhooks and launch jobs. And this is going to save you lots of time. Next, we're going to get into practical examples of using the model registry with automation, including integration with external ML systems. I just want to take a moment to introduce myself. So my name is Hamil Hussain. I have over 20 years of experience deploying machine learning in various industries. I focused my career a lot on machine learning infrastructure and tools. I've spent a lot of time creating projects in open source. I've been involved heavily with projects such as FastAI, Jupyter, and Kubeflow, either as a core contributor or as a lead developer. And you can find more about me on my website, hamil.dev. So the next thing we're going to get into is the intro to the model registry. And to do that, I'm going to hand it off to my talented colleague, Noah Schwartz, who is the product manager at Weights and Biases. 
And she's going to talk to you in detail about the model registry. And she is the world's expert on the Weights and Biases model registry. And she's going to tell you everything you would want to know about it and kind of all the moving pieces. So I'll hand it over to her.
Welcome to the Model CI/CD course!
286
Weights & Biases
20240528
First, let's get into why model management is such an important topic. To motivate it, let me show you a real-life example from one of my recent projects. In this course, you will learn how to avoid such guesswork and the consequences that follow it by learning model management techniques, completing your coursework, and getting a certificate. Learn more about Hamel Husain at hamel.dev.
2024-06-10T11:54:02.701326
https://www.youtube.com/watch?v=PBk2AS_FGMY
Hey, well thank you for the super warm introduction. I'm super excited to be here today to talk through what I personally think is one of the most important product areas in Weights and Biases for enterprise teams working on model development. But I'm biased because I'm one of the product managers here at Weights & Biases, and I'm primarily focused on a bunch of different areas, but primarily artifacts, model registry, and some of the features that we will be talking through around automations. Before coming to Weights & Biases, I was a product manager at Scale.ai and did work around synthetic data. So I'm excited to kind of take my learnings from, you know, managing complex models and data sets and walking through some of the stuff that we've been doing here at Weights and Biases to make it easier to manage models in production. Cool. So just to set the stage on where the model registry becomes relevant within a machine learning team's workflow. So you can see here that on the left hand side of the screen, we have kind of our experimentation. This is where folks like data scientists, machine learning engineers are super involved in this workflow of creating models, evaluating them, packaging them. And then on the right-hand side is kind of the second part that's more focused on deploying models to production, releasing them, and monitoring them. So to walk quickly through what a practitioner's workflow might look like. And we'll go more into detail a bit later, but you can imagine you have your data scientist or machine learning engineer that's developing these models in the experimentation process. They're producing hundreds of training and evaluation runs to try to improve a model's performance. And then eventually we get to the kind of part around like deployment, where a new model that comes from all of those hundreds of training runs is eventually released. And in the middle, we can see the staging step. So very similar to the world of DevOps, staging is the step in between where a practitioner says, hey, I have a new candidate for release, and we need the model registry to house these staged models and manage all of the workflows associated with candidate models. So again, left-hand side, very data science, research heavy, and then the right is kind of all the work that's being done to keep the product running and operational. And the goal of the model registry is to be this kind of centralized point as models pass from left to right. So one thing I wanted to spend some time talking about is, you know, are we inventing the wheel here or where have we seen this in other workflows? And I think the best example is how do we solve this problem in software now? So I think we can really think of, you know, the model registry and you'll notice similarities in terminology here, but it's kind of like a container registry, like Docker Hub. So a developer is publishing an image to this container registry, and anybody can download and consume this image, obviously, if they have the appropriate permissions to be doing so. And there's a few other pieces of similarity. So, you know, versioning and tagging of these images, each container image is versioned and kind of has its own tag. This piece is crucial for kind of how you maintain differentiation between versions of an application or a service. And then also kind of how do you ensure that specific versions can be deployed or rolled back if needed. 
Image distribution, so kind of enabling the distribution of container images across many different environments. And the same theme that we see around collaboration, so teams can now share and collaborate on a standardized container image. The other two pieces to mention, which we'll dive more into how they're relevant to model registry, but we see patterns around using these container registries for access control and integration with CI-CD. The other example that I have here on the bottom right again is a different type of registry that we're familiar with from the world of software. So similar to Python package registries like PyPy, So similar to Python package registries like PyPy, we have this public registry of Python libraries, and this facilitates the distribution of Python packages. So developers can easily share their libraries and kind of post them to this registry. And then anyone in the community, in the broader Python community, can kind of use them and install them. So across these two examples and the patterns we saw around publishing and using in model registry, what do all these things have in common? And then what do these software solutions enable? So they enable a way for developers to build and package code and publish usable versions of their code to a central repository so that others on the consumer side, so that might be a dev team, it might be an ops team, can go ahead and consume these models directly. Or, sorry, in this case code, but in the world of model registry, consume models directly. Nice. So I'm going to spend some time talking through, you know, you're an enterprise team, you have a bunch of machine learning engineers, and you know, you're putting models into production. Why should you care about this tool called a model registry and what are kind of the big, the most important highlights and benefits of implementing one into your team's model CICD workflow. So the first piece is abstracting out the messiness of experimentation. So you can see here that on the left-hand side, this is kind of a screenshot of our Artifacts product, which really at a project level is storing all of the different model checkpoints that are being created. So you have a bunch of different runs, and even within those runs, you know, you might have hundreds of different checkpoints and you need a way to kind of post the checkpoints that are candidates for production so that whoever needs to consume it, so like the MLOps engineer or the team lead that's trying to see the latest and greatest model or the product manager isn't sifting through hundreds of checkpoints and trying to figure out, hey, which was the one that you wanted me to put into production? The second piece is on governance and control. So being able to answer questions for models that are in production, who promoted this model? When did they promote it? What was the exact version of the dataset used? What kind of post-processing happened to this dataset before model training? So a production model is going to be served to users, and the implications of a model that's deployed without proper approvals are a lot greater. There's a lot more at stake than publishing to a conference papers. 
So one example that I wanted to pull in from one of the customers that we're working with closely, they're a large enterprise customer and they're putting large generative AI models into production and they have kind of requirements from their legal and compliance teams and need to be able to prove that for the models that are putting into production, each model that's being deployed can be traced back to prove that it was trained on a data set that was licensed or public and that the company has permissions to be training these generative models on a specific data set. And that's where the story of lineage becomes really important that we saw in this slide here. This is kind of one snapshot, but having anyone from the compliance or security or audit team being able to say, hey, this is the model in production. I want to be able to work backwards myself and see that the data set you used to train it on is licensed so that we don't get in trouble with legal implications downstream. And then this is kind of a nice comic that I've been referencing a lot when talking to customers, but you'd be shocked at how many customers we talk to regularly that when they first start using the product, one of the biggest questions that they have a tough time answering is, I have a model in production, I don't know which exact snapshot of data was used. So yeah, that's on the piece of kind of governance and control. And then the last story is around accelerating model rollout. And this becomes relevant, if we think of that kind of graphic that we saw in the beginning around we have experimentation on the left-hand side and we have production on the right-hand side. Typically within large enterprise teams, there's different cohorts or different types of folks that are responsible for doing the experimentation and then the folks deploying these models to production. And the personas, we'll dig into a little bit more of that in the next slide, but what we see there is this critical handoff point that without a model registry is super slow, frustrating, and error-prone. So again, we see kind of these scary questions like, or comments, hey, like, I think you deployed, you know, the wrong model. I sent the wrong file. Oh, well, I didn't know this new model was even ready. Oh, I kind of deployed this model because I thought that it was ready to be deployed. And that's what, you know, Person XYZ told me. So the more time practitioners are spending doing this manual process of handing off to the MLOps team, the less time they're spending on their work on model improvement. And then the same story goes for folks on the MLOps side. So they're spending kind of a bunch of time trying to find the right model and then rolling it back in case a mistake was made and the wrong model was deployed. And in the end of the day, you know, the longer it takes end users to use the current best model, this is kind of dollars lost for the team because there's a new model that's ready to be used. Let's make it as fast as possible to roll this out to the surface. And just to talk a little bit about these personas that I've been throwing in. So I imagine that there's folks who are joining this course who are familiar with the other pieces of the Weights and Biases ecosystem. And one thing to note about registry is that it actually introduces new personas into the puzzle. 
As I mentioned, it's this handoff point between ML practitioners, so the folks regularly using experiment tracking, creating their runs, logging models, and the ops team that's deploying the model. And so some of the new personas, there's, there's that example of the MLOps team. They're responsible for this handoff and they need to be able to answer what is the best model and, you know, am I ready to be able to deploy it? Then there's the compliance legal security stakeholders that I mentioned in the customer example there. To them, lineage is really important. They want this assurance that the entire history of the model and the ingredients and components, whether that's metadata, which libraries were used, what pre-processing happened, can all be traced. And then lastly, you know, the folks, ML, an exec, a lead who wants this view of all of the models a team is working on. You know, experimentation is super messy, and that piece is more of an implementation detail. What people really need to see is, you know, what are the published versions? And there's a few folks that I also put in here, including, you know, product manager. So this is, you know, the space that I'm in. And as PMs, we're currently, or we're constantly looking for, hey, where is the documentation for, you know, what the models the team is working on? What is up-to-date performance? What are the expected output? So all of this kind of documentation and model card work also needs to be there for this persona. And, you know, before I'll hand it back to Hamil to demo, you know, the product and these APIs live, I first wanted to mentally prepare you as to how easy this really is to do. And that I'm talking about all these kind of like features and important aspects of enterprise model management. You might be thinking this kind of sounds like a mess to set this all up. And kind of my commitment to you is that you can actually do this all log and link your model in one line of code. Three, if you include, you know, installing weights and biases and starting your experiment. you know, installing weights and biases and starting your experiment. And with that being said, I'm going to hand it back to Hamelin and super excited to walk through the product itself, some of the APIs, and transition into the second part of the course where we'll talk about automating workflows for kind of streamlined model management and connecting this model management piece to your CI-CD pipelines.
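As a preview of the "log and link your model in one line of code" claim, here is a minimal sketch. It assumes the Run.link_model helper available in recent wandb SDK versions and a placeholder checkpoint path and registered model name; the live demo in the next lessons shows the real workflow.

```python
# Minimal sketch, assuming a local checkpoint exists at "model.ckpt" and that
# your wandb version provides Run.link_model; check the docs for your release.
import wandb

run = wandb.init(project="my-project")
run.link_model(
    path="model.ckpt",                    # local checkpoint file or directory
    registered_model_name="My ML Task",   # the registered model (task) to link into
)
run.finish()
```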
What is Model Registry?
847
Weights & Biases
20240528
Dive in with Noa, Product Manager at Weights & Biases, to learn what Model Registry is! The definition we will be using in this course is: model registry is a repository of a team's trained models where ML Practitioners publish candidates for production to be consumed by downstream teams and stakeholders.
2024-06-10T11:55:32.983984
https://www.youtube.com/watch?v=t3t49lkPu8Q
Hello. So the next thing I want to talk about is how to actually log a model to the model registry and link a model from a run to the model registry. And before I get into the code and all the different moving parts, I just want to remind you about the documentation. So anytime that you get lost in what I'm saying or you want to know more information, a really good resource is the Weights and Biases docs. Out of all the tools I use, I would say Weights and Biases does a pretty good job of keeping their docs up to date. The way I like to use the docs, on Mac anyway, is Command-K and you just search. So in this case, model registry (I have caps lock on, sorry about that), model registry, and I like to start with tutorials. In this case, Register Models is a really good place to start, and it has a great walkthrough of a real end-to-end example of how to log models and create artifacts and do all the things I'm going to show you today. But more specifically, there are also these docs, which go through almost the exact example that I'm going to show you and explain, from more of an API perspective, what's going on in that code. So I highly recommend at least going through this Log Models page; there are tutorials and also guides, and this is one of the guides, called Log Models. Please take a look at it. So next, let's jump into the code. This is the code that we'll be using to link the model to the model registry. The first thing you want to do is install Weights and Biases, and you want to make sure you install the latest version, so pip install -U wandb. The next thing you want to do is some setup. First, we want to attach to a project that we already have or create a new project, and that's what this wandb.init is doing. And then we're going to create the artifact. In this case, we're creating a dummy artifact, which is a text file. So what is an artifact? An artifact is a file that saves some kind of state about your model that you can use downstream for further training or inference. In this case, we're just creating a dummy file that is just a text file. But that doesn't matter. This is just to make sure that this example is minimal, so you can concentrate on understanding how to log a model and link it to the registry. The next step, after you know what asset you have or where your asset is, is to actually create an artifact. And so a good API to use here is wandb.log_model. wandb.log_model is a shortcut for logging an artifact of type model. And so what we do here is we pass in the name of the artifact, which is just model dash the wandb run ID, give it the path to the artifact, which in this case is just this text file, and then aliases. Aliases are metadata that you can attach to an artifact that flag the lifecycle of the artifact. You can use them any way you want, but what I recommend is you try to create aliases that give you an idea of where the model is in its lifecycle. The aliases that I like to use are dev, staging, review, production. All of these are good aliases. I'm going to go ahead and run this code, and it's going to upload this artifact into the run and give me a link. Let's go ahead and click this link. Okay, so there's nothing logged to this run except for the artifacts. So in this view (this is the overview of that specific run), nothing really happened in this run, but there are artifacts.
And in this case, we have that artifact that we logged of type model. I can click on that artifact, and the next thing that you'll notice here is there's the ability to link this artifact to a registry. Now there are really two ways to link to a registry: one is with the UI and one is with code. I want to show you how to do it with the UI first, just so you get a good mental model of how things work. But feel free, after you learn how to use the UI, to use the code. And you'll see here the different aliases that we used in our code, plus some other default ones; Weights and Biases will automatically add the latest alias and a semantic versioning alias for you. Noah, do you want to talk about these two tags, the ones that are added by default by Weights and Biases, and how that works? Yeah, sure. As Hamel mentioned, the idea of being able to attach aliases to a model version is to serve as a semantically friendly identifier for a specific model version. So you don't necessarily have to remember, oh, this is the third version of this artifact that I'm working on. In this specific case, our artifact only has one version. But you can imagine a case where you've done a bunch of versioning, maybe tuned some of the hyperparameters, or you're logging a model at every checkpoint, and so you might have an artifact with 100 checkpoints. And aliases are a way to use, again, a semantically friendly identifier so that later in your code you can reference the artifact rather than remembering the specific version number. So latest is one of the default aliases, and we essentially make sure that there's always a pointer behind the latest alias. The latest alias serves as a pointer to the most recent model version that you added to an artifact. And so this way, if you're referencing or using this model in your code, you can use the latest alias to make sure that, as the model is being versioned, there's always a way to point to the most recent version. And the other alias here that we also add by default is what you'll see as the v-zero style version alias. As you increase the number of versions, we're going to keep count, and as any of the files change in your artifact, we'll automatically increment to the next version.
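Here is a sketch of the minimal logging example described in this lesson, assuming the Run.log_model shortcut available in recent wandb releases. The project name, file contents, and alias choices are placeholders.

```python
# A minimal sketch of logging a dummy "model" artifact, assuming a recent wandb
# version (pip install -U wandb) that provides Run.log_model.
import wandb

run = wandb.init(project="model-registry-demo")

# Dummy model asset: a plain text file, just to keep the example minimal.
model_path = "model.txt"
with open(model_path, "w") as f:
    f.write("pretend these are model weights")

# log_model is a shortcut for logging an artifact of type "model".
run.log_model(
    path=model_path,
    name=f"model-{run.id}",
    aliases=["dev"],   # lifecycle flags; use whatever fits your process
)
run.finish()
```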
Logging Models
479
Weights & Biases
20240528
Here is the Model Registry documentation (http://wandb.me/ModelRegistry), where you can find code examples, explanations, tutorials and many more valuable insights. Before we dive in, to set the stage, we will be starting with the Log Models section. (http://wandb.me/LogModels) To try it yourself, follow my code here. (http://wandb.me/LogModelDemo) First, we will install wandb and create our first artifact, which in this case will be a logged model. To this artifact we will add aliases, I like to use aliases that indicate the model stage for example 'staging', 'production' etc. You can use them however they fit into your process!
2024-06-10T11:56:25.527231
https://www.youtube.com/watch?v=9ZPjqJctmMA
So just to show the code again, I mean, this is how we created the artifact. The next thing we want to do is link the model. So we're going to click on this link to model registry, and then we're going to select a registered model. So this is a bit of terminology I want to clarify a little bit. Noah, do you want to explain what exactly is the difference between the model registry and a registered model? And what is a registered model in this case? Yeah, so you can think of the model registry as kind of the entire repository of all of your models. It's like the big bucket where all of your pinned or bookmarked model checkpoints will go. And a registered model is... you know, within a model registry, you might have a bunch of different registered models. Registered models typically represent a specific task, a machine learning task that your team is working on, where people from different projects are publishing their best checkpoints for that project. So for example, we have one customer that we've been working with that uses the model registry, and they have different models for things like depth estimation for underground manufacturing. And so you can imagine that one registered model there represents depth estimation. If you're working with ensemble models, then each different task within that ensemble model will have its own registered model. So you can really think of a registered model as being synonymous with an ML task. And people from different projects across your team might be working on those tasks in their own projects, but they're linking to this team registry, so we have this collection of the very best model checkpoints across all projects for that ML task. It's essentially an extra layer of hierarchy to organize pinned or bookmarked or linked models into specific, almost, folders: a folder of versions that are all working towards the same ML task. Great. That's really helpful. So just to reiterate, a registered model is a task. And so when you click this link here, this link to registry, it's going to give you a dropdown of all of the different registered models or tasks that you might want to attach this artifact to. In this case, I have two of them from prior projects. But I want to create a new one, and you can do that here. I'm going to call it, you know, my ML task. It's going to be of type model. I'm going to say this is a demo, for example, like that. I'm not going to talk about this tags box, so we're going to just skip over that. And I'm going to go ahead (so this is very convenient) and link that model artifact that I just had to this task. So this task is my ML task. I can create more aliases on the fly. Let's say I get to this point and I think to myself, hmm, this best-in-dev alias that was added... you know, I really would like another type of alias. Like, you know what, at this point I would really like to add the staging alias. This is like one last chance in a way. You can still add the alias afterwards, but this is just a way that you can add another alias at this point if you want. And then you can say link to registry, which is really, you know, creating that link to the registered model. And then you can go and view it in the registry.
So you're viewing the registered model in the registry. And you can see the different aliases, for example. And what you have here is this is the model. This is actually a pointer. So this is a registered model, and this points back to the artifact. And if you click View, you can get sort of all the information. We're going to go through the UI very carefully in the next section, but I just want to show you what it looks like. That's how you link a model to a registry. What I just showed is you can do that in code as well. To do the same thing of linking to a registry in code is you can run this code right here in this cell. And so the first thing you want to do is get the artifact name. Now notice this colon best. Noah, do you want to talk about what this colon best means in this specific situation? this colon best means in this specific situation? Yeah, so this is a really good example of an alias being used to reference an artifact. So when you're pulling down an artifact from the registry or whether it's from directly from your project, then instead of referencing which version you want by using the version number like V27, then this would be a good place if you want to always be linking, for example, the latest version, or you always want to be linking the model version that performed the best, then you can add colon and the alias. And that again lets you use kind of a semantically friendly reference using terms that we typically encourage to be like standardized across your team. And one thing to note is that, you know, as Hemal mentioned, we when we're linking artifacts or we're linking model artifacts to the registry and we're attaching aliases there. It's a great place to be adding kind of these semantically friendly keywords that reference where the model is in its lifecycle. So oftentimes I've worked with teams that when they're first adding when they're linking a model to the registry, they might add the alias, you know, like needs QA or review or evaluation to mark it as, hey, this just entered the staging pipeline. Now, after going through a round of testing, the alias could change, you know, to reflect that the format might have moved into being quantized. So we've seen some customers add an alias like quantize. So that might also represent, oh, it's in that stage of the pipeline and it's moving closer to production. Or if in the process it's decided that a model didn't pass the tests, an alias archived might be added. That's another option there. And that way teams know to kind of clean it out of their registered model or their task. So aliases are a great way to programmatically kind of reference the artifact that you're looking for using a term that's kind of friendly rather than memorizing the version number. Thanks, Noah. That's really helpful. And I want to mention with aliases, only one model in a specific model registry or in a registered model collection can have the alias. So let's say you can't have two latest models. 
And that's really useful so that you can, they're kind of they're unique those aliases are unique and allows you to manage the life cycle um and also want to say that it might seem so it might seem a little bit cumbersome that you know i'm adding aliases and doing all this stuff but actually what we're going to show later on is how to build lots of automation on top of this and how to glue other systems and you can actually automate a lot of this stuff you can automate you can you know because you have apis and because of some other features we'll get into uh you'll be you can we'll show you how you can like automate kind of moving models through different parts of the workflow uh with a human in the loop so it's not at all complicated and it's very nice and it's easy to use. Then I would also want to mention, this is a very simple minimal example. We're going to have a later section in the course where we go through an end-to-end example with the real ML use case of doing this whole process. Just to solidify this and give you more intuition of like how this works in a real ml project so just to conclude like you know this is how you link a model this is exactly what i did in the ui is i took that model and i linked it as a registered model and uh you know this is the same name I gave it. Remember, if you recall, I added that staging tag and that's all you have to do to link a model to the registry.
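Here is a sketch of what the code path for linking might look like, referencing the source artifact by an alias like ":best" rather than a version number. The artifact name, team entity, and the "model-registry" target-path format are assumptions to verify against your own registry; the methods assumed are use_artifact and link_artifact from the wandb run API.

```python
# A sketch of linking an already-logged model artifact to a registered model in
# code. Names and the registry target path are placeholders.
import wandb

run = wandb.init(project="model-registry-demo", job_type="link-model")

# Fetch the artifact by a semantically friendly alias rather than ":v27".
artifact = run.use_artifact("model-abc123:best", type="model")

# Link (i.e. bookmark) this version into the registered model for our ML task,
# attaching a lifecycle alias at the same time.
run.link_artifact(
    artifact,
    target_path="my-team/model-registry/my-ml-task",
    aliases=["staging"],
)
run.finish()
```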
Linking Models to Registry
575
Weights & Biases
20240528
This lesson shows you can link your model from your code or in the UI. Here is some helpful terminology to make you sound like the model management expert, follow the links for more details and examples: Model version - A model version represents a single model checkpoint. Model versions are a snapshot at a point in time of a model and its files within an experiment. Model alias - Model aliases are mutable strings that allow you to uniquely identify or reference a model version in your registered model with a semantically-related identifier. You can only assign an alias to one version of a registered model. This is because an alias should refer to a unique version when used programmatically. It also allows aliases to be used to capture a model's state (champion, candidate, production). Model tags - Model tags are keywords or labels that belong to one or more registered models. Use model tags to organize registered models into categories and to search over those categories in the Model Registry's search bar. Model artifact - A model artifact is a collection of logged model versions. Model versions are stored in a model artifact in the order they are logged to the model artifact. A model artifact can contain one or more model versions. A model artifact can be empty if no model versions are logged to it. Registered model - A registered model is a collection of pointers (links) to model versions. You can think of a registered model as a folder of "bookmarks" of candidate models for the same ML task. Each "bookmark" of a registered model is a pointer to a model version that belongs to a model artifact. You can use model tags to group your registered models.
2024-06-10T11:57:37.886357
https://www.youtube.com/watch?v=AZ-8djQfsRg
Hello. Now that you've seen how to link a model to the model registry with code, I want to go over the UI a bit and help orient you to the model registry and the various components within it in the UI, because it's actually very useful. So to see the model registry from your homepage, in the left-hand side, you'll see a list of your projects, you'll see applications, and you'll see this model registry. And one way you can see all of your model, your registered model. So this model registry is a specific product in Weights and Biases, and the model registry has a list of registered models. And again, the registered models are just tasks. These are different tasks that represent, you know, different use cases that you might have, that you're trying to tackle with, you know, specific models. And if you recall in the last video, we created this MyML task. Now I want to show you a little bit more of a professional example. And so I'm going to go to this registered model here. And this is this fine tune, this fine tuning LLM model that we're working on. And what you see here is all the different versions of this model you can see the different aliases see what is in what's the latest a model which one is in production which one's at staging so on and so forth we can click view details and view details this is where you can see the model card and the model card is really helpful, especially like if you're working on teams and there's lots of different models, there's lots of different people collaborating. And you want a look like a central place where you have information about the registered model, about the task really that you're trying to tackle. And so, you know, things like, you know, as much detail as you think is helpful, but it's helpful to actually standardize this across your team, across your company, so that when people see a model card, they know, they know what they're looking at. They have like a kind of lay of the land. They know, okay, like where to find information and like different people do this differently if you want like a good template or good like mental model this is like somewhat similar to the idea on of hugging face model cards is like hey give you like all the information that you might think are relevant about a model so in this case what we have here is model details, you have like you know what the training data is, how long it took, the hardware, the intended use case, some performance metrics, you know limitations, ethical considerations, things like that. Now we have a more detailed view of aversions. Now we can get into things like who logged these models. We can get into some of the more important metrics about the models. And, you know, this is a very helpful view to kind of quickly scan all the versions. And then below this, we'll have automations. So this lists the different types of automations that we have. In this case, we have webhooks and jobs, which we'll get into in a separate section. This actually helps us to glue sort of other systems to weights and biases and use the modeled registry as a central hub to manage the lifecycle of models. central hub to manage the lifecycle of models. Next thing we can do is we can click on one of the versions. I'm going to go ahead and click on this latest one. This is where we can see different tabs. This version tab, you've seen it before, gives you a high-level overview. Metadata is useful. 
Metadata, it promotes this information that has been logged at the run level and it promotes it upwards all the way to this level so that you don't have to go dig for it. It'll tell you all the summary metrics and other parameters that you might have logged it's really useful just to be able to see it quickly on this high level you can you can also search for things which is very useful uh this is one of my favorite tabs i use this all the time because i don't like to memorize code is that you could just copy and paste okay how to download this artifact which is really useful you can see the files these are different files um that belong to this specific artifact or this model and like these are all the like parts that you might need to load this model for example like commonly you might need a config, you know, plus model weights and maybe some other things. So actually this is like, you know, the assets that you need for that model. And I think like one of the most popular tabs is this lineage tab. This lineage tab actually shows you what processes created this artifact. Noah, do you want to talk a little bit about this lineage tab and what's going on here? Yeah. I think we wanted to spend a little bit more time on this tab specifically because it's one of the most useful one in our opinion, but I think to a lot of the customers we talk to. And it's, as we see up here, it's our lineage tab. If you remember when I spent some time earlier talking about why the model registry is so critical for teams who are kind of enterprise teams that are doing model management, lineage was one of the key features that I mentioned being critical to things like governance. So how, as an ML team, can you communicate to security stakeholders, compliance, auditors, that you know the exact history of essentially construct this graph that has as nodes runs and the different artifacts that are used as input and output. So you can see this is a really great example. It's pretty simple and output. So you can see this is a really great example. It's pretty simple and straightforward. We have a training run that's happening, and it's taking in this data set as input and might also be using some kind of automated job as well. And as output, there's a model. So if you remember back to earlier when I mentioned, you know, we're working with customers that have to report this information back. And some of the big questions tend to be around what I have this model in production, what is the data set that was used? So this is a very simplified example of how you can kind of trace back, you know, you have a model, what was the exact version of the data set? And you'll note note here that the dataset box actually represents a specific version. That way, it's not just largely what data was used, but it's the specific snapshot of data that went into train this model. That's great. I really like this lineage view because these are not just like, you know, this is not just like some kind of artwork that you can click on this and you can take you to run, which is super cool. You don't have to hunt around and figure out like, you know, it's actually really nice to be able to look at that. Noah, what's going on here with like these different types of styles and like what's when might you want to think about this slider thing? Yeah, great question. 
So the style is a way for you to configure how much information you want summarized. If you click the complete graph, it zooms out and you can see all of the peer runs that are hidden in the detailed view; complete is the largest level you can zoom out to, where you see every single node that's happening in parallel. Simple goes in the reverse direction: you just see a summary of information, and it groups a bunch of the artifacts together. So depending on what questions you're trying to answer, you can play around with the different configurations there. Typically I find that direct lineage is the most common, and folks will walk up and down that graph. One thing I wanted to mention as well is that this graph can all be navigated programmatically. So if, in your code, you're trying to find the artifact that was consumed by the run that you're in, it's also possible to move up the graph and check out the ancestors of the artifact, and then down as well to all of its children nodes. As for generated artifacts: you'll notice that there are some system-level artifacts that we use to store information. One example is run tables: when you're logging a table to Weights & Biases, we actually store that under the hood as an artifact. Another example might be a launch job, those configurable, reusable job templates that we'll spend some time getting into later; that's another example of an auto-generated artifact. So these are system-level artifacts that we use to store other things you're logging in Weights & Biases. It might be helpful to see those in the lineage graph, and it might also clutter it, so we give you the option there. Great, thank you, that's really helpful. And that's it, that's the overview of the user interface for the model registry and how to navigate its most important features.
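For reference, this lineage graph can also be walked from code with the W&B public API. Here is a minimal sketch; the entity, registry path, and alias below are placeholders, not the exact names used in this course.

```python
# Minimal sketch: walking artifact lineage with the W&B public API.
# The entity/registry/alias names are placeholders; substitute your own.
import wandb

api = wandb.Api()

# Fetch a specific model version from the registry by alias.
model_art = api.artifact("reviewco/model-registry/Finetuned-LLM:production", type="model")

# Walk "up" the graph: which run produced this artifact?
producer_run = model_art.logged_by()
if producer_run is not None:
    print("Produced by run:", producer_run.name)
    # ...and which artifacts did that run consume (e.g. the exact dataset version)?
    for consumed in producer_run.used_artifacts():
        print("Consumed:", consumed.name)

# Walk "down" the graph: which runs have used this model version since?
for downstream in model_art.used_by():
    print("Used by run:", downstream.name)
```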
Navigating Model Registry
641
Weights & Biases
20240528
Now that we know how to link your model, let's dive into the Model Registry UI. If you would like to click around and check out how the model registry looks in use, look here: https://wandb.ai/reviewco/registry/model?utm_source=courses&utm_medium=courses&utm_campaign=emm
2024-06-10T11:58:50.182228
https://www.youtube.com/watch?v=DsSvVr7nT0w
Hey, so I'm back to talk through some high-level concepts about this feature we've referenced called automations. As Hamel did a great job of describing, it's really the glue that lets you connect different actions that you're performing in Weights & Biases and hook them up with downstream events that you want to trigger automatically. That's what this whole topic of automations is about: setting up these event-action pairs. So to clarify, an automation is just an event-action pair, where an event is something that takes place, like a specific change in the model registry, such as a new model version being added, and that is hooked up to an action, a downstream action that is triggered in response to the event happening. So it's nothing fancy. Again, it's this event-action pair. And you can see here in the diagram that we have two types of events you can set up. The first is adding a new version: the automation will listen to any changes in the registered model that we looked at earlier, and you can set one up that says, hey, I want you to kick off a model evaluation pipeline when any new model is added. So the event there is linking a new version to the registry. The second one we see here is adding a new alias, which means you're adding a model version with a specific alias. This might be something you set up, for example, to kick off an action that should only happen when a model is in a specific stage of the model lifecycle. So adding the alias quantized might be a great event to use to trigger a downstream process of model quantization. Another one might be prod: if you're adding the alias prod, that might kick off a downstream action that deploys the model to production. And on the right-hand side, we see the action. Again, this is the downstream action that's going to be triggered in response: the automation listens for the event to happen, and then this downstream action happens. In Weights & Biases, we currently have two options for these actions. The first one is a webhook. This might be a term that you're familiar with from software, and Hamel is going to spend a bunch of time walking through that, with some really good examples of how you set a webhook up locally and then move it to the cloud, to really replicate what we mean when Weights & Biases kicks off a webhook. It allows you to kick off a downstream process in an external application, maybe kicking off a model evaluation pipeline in your workflow orchestrator. It's essentially just a POST request that lets you hit an external endpoint in another application or server, and lets you connect events in Weights & Biases to any other tooling you have, for example tooling that might be managing your model CI/CD processes. The second type of action we see here is a launch job. A launch job is a containerized, reusable, and configurable job that can be executed in various different compute environments. We typically encourage this for any actions with heavy compute requirements, for example evaluation or model retraining, whereas the webhook is typically more helpful for handing off models to external systems like CI/CD tools, such as GitHub Actions, or any of the workflow orchestrators you might be using.
And we'll spend some more time talking about webhooks; that will be the main focus of some of the demos that Hamel will be giving. But Launch is also another product inside the Weights & Biases family. Cool. So I put this diagram together. I talked about the anatomy of an automation, that it's this event-action pair, and you might be trying to understand, OK, why is this relevant to my CI/CD workflow? How are automations useful for connecting Weights & Biases to your CI/CD workflow? So I put down a few examples here. In the farthest column on the left, you might recognize a bunch of workflows that you're trying to automate, and the key thing to point out is that you can use an automation that listens to changes in the model registry or to your artifacts to automate these processes. A great one I have down here is model evaluation. You're trying to make sure that any time a new model is a candidate for staging and is linked to the model registry, it has to go through a suite of evaluation tests. So the automation will listen to changes and say, oh, a new model was added; I'm going to go ahead and kick off this model evaluation process, and potentially create an automated report that pulls in all of the evaluation results so they're ready to be analyzed by the team. You can also apply automations to dataset artifacts, and we'll spend less time on this, but it's another great one: you set up an automation to listen for any new dataset versions that are added, and whenever a new version is added, you kick off a model retraining pipeline. This is a use case we see a lot with customers who are getting in a bunch of new data. Periodically, say once a month, their data is ingested into Weights & Biases, and they want the automation to listen for a new dataset so that the retraining process happens automatically, rather than someone saying, hey, we have a new batch of annotated data, can you go in and retrain the model? The last thing I'll note here, specifically for webhooks, and again we'll spend some more time and demo this live, is that you might be thinking: hey, if we're talking to external systems, what about any authentication, secrets, or access tokens that I might need, and is it safe to store those in Weights & Biases? The answer is yes. We have a secret store that safely allows you to store any of the secrets or access tokens that the receiving server is going to require to authenticate a webhook from Weights & Biases. And beyond our cloud deployments, for any customers that are on on-prem deployments, we offer integrations with the secret managers for all three major cloud providers.
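To make the event side concrete, here is a small sketch of how an "alias added" event could be fired from code rather than the UI, by appending an alias to a registered model version; the registry path and alias here are placeholder names.

```python
# Sketch: firing an "alias added" event from code by tagging a model version.
# Any automation listening for this alias (e.g. a webhook that kicks off
# quantization or deployment) would then run as the "action".
import wandb

api = wandb.Api()
model_version = api.artifact("reviewco/model-registry/Finetuned-LLM:v3")  # placeholder path

model_version.aliases.append("quantized")
model_version.save()
```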
Introduction to Automations
439
Weights & Biases
20240528
Starting with Automations, we need to understand what an automation means in this context. In simple terms automation means setting up an event-action pair. An event is a specific change that takes place in the model registry (for example a new model is added to the model registry) and this triggers a response action. Weights & Biases supports 2 types of actions: launch jobs and webhooks. In the next lessons we will go deeper into each type. You can also connect your automations to your existing CICD pipelines. Further reading: Automations for Model Registry docs: https://docs.wandb.ai/guides/model_registry/automation?utm_source=courses&utm_medium=courses&utm_campaign=emm#create-a-webhook-automation If you are interested in learning more about CICD check out Hamel's previous CICD for ML (GitOps) course: https://www.wandb.courses/courses/ci-cd-for-machine-learning
2024-06-10T11:59:53.685351
https://www.youtube.com/watch?v=ZPMFeUURI4w
Hello. In this next section, we'll be talking about webhooks. We're going to go over the fundamentals of webhooks, what they are, and build one from scratch to give you intuition about how they work. Webhooks are very fundamental in software engineering; they're everywhere. If you google webhook, you'll see a lot of different references to webhooks. In particular, you'll see webhooks used in many different dev tools: you'll see things about integrating webhooks on GitHub, Zapier, Twilio, Stripe, SendGrid, and so on. Webhooks are very common in a lot of developer tools and a lot of infrastructure. In other words, webhooks are a really common mechanism that different infrastructure providers use to communicate with each other, and a flexible way for you to package a communication between one tool, like Weights & Biases, and another tool. So what is a webhook? Well, if you google it and try to read about it, it can be kind of confusing. For example, the Wikipedia page defines a webhook as a method of web development, basically where you have a web application with custom callbacks. If you don't know what a callback is, that's fine: a callback is just a function or some code that is going to be executed at a later time. But even that, I think, is more complicated than it needs to be. All of this terminology is more complicated than it needs to be for you to understand what a webhook is, and I think the best way to understand a webhook is to basically build one. We can do that with Python, and that's what we're going to do right now, so let me open some code. There are two components to a webhook: a web server that will receive the webhook, and the client that's going to send it. I have the code in this repo called wandb-modal-webhook, and let me go ahead and open the code that I want to go over. The first thing we want to make is the webhook server, and we're going to use FastAPI for this. FastAPI is a very popular Python library for creating web servers. Basically, we do the usual FastAPI things: we set up the server, and this class is actually the data. This is a Pydantic model; if you're not familiar with Pydantic, it's basically like a data class that specifies what kind of information the web server is going to receive from the client. It's going to receive some kind of package that has all of these fields: an event type, an event author, an alias, an artifact version, and so on. And I made this particular kind of data for a very specific reason: it's actually a mock-up of what Weights & Biases sends to a web server. We don't have to get into that now; it doesn't really matter what this data is, to be honest with you. This is just the data that this particular web server expects, and it's going to validate it against this schema. This dunder str method is just customizing exactly how this information is printed out. If you don't know about dunder str, it's a special method you can override in Python that lets you control how something is printed; in this case, we're just printing out all of these fields in a very particular way. Finally, we have the web server.
The web server has different endpoints. This is the root endpoint here, and it is going to receive two things: the event, which is the JSON that follows this schema, and a token. The token is pulled from the header, and I'll show you that in a moment. The server checks that the token is the secret random token. This is not secure, by the way; this is just me showing you a very basic workflow, a very basic webhook. Then there's a check that says, hey, if the token isn't correct, just raise an exception. This is a slightly verbose exception, but I just want to set the stage for how you would do it properly. And if the token is correct, the web server returns this message. That's all it does. So what do we mean by callback? A callback is just any function that you would execute when receiving this event. In this case, we're just returning the string, but you can imagine there could be another function like, let's say, pull model or something like that, and then you would execute it here. That would be an example of a callback, where this web server receives some information and, upon validating it, decides to execute some code. We don't have to get into that, because I don't want to confuse you; I want to just show you the very basics of a webhook. So let's start this web application. I'm going to go ahead and open the terminal, and I'm going to zoom in so you can see my screen. This is a bit of a split screen, so hopefully you can see it; let's zoom in and make both sides really big. OK. The first thing we want to do is start that web server, so I'm going to type in this command. Uvicorn is just the thing that runs the web server, the name of this file is fastapiserver.py, and the name of the application is app; that's why you have uvicorn fastapiserver:app. This starts the web server, and you can see on this side the logs of the web server. What we want to do next is send a payload to the web server, some kind of request. This file, curl.sh, has a minimal request. Let me just zoom in a little bit more. This curl command, again in curl.sh, has the following headers: the Authorization header, where you have to make sure you put this Bearer word in, otherwise it's not going to work, and the content type, which is JSON. And we're going to send exactly the schema that we saw in the base model: the event type, the event author, the alias, the artifact version, and so on. You can see that this is exactly the information the server expects, and here is where it's going to check that this token actually matches the secret random token. You don't have to mind this stuff up here.
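For readers following along without the repo open, a stripped-down version of the server described here might look like the sketch below. The field names follow the schema mentioned in the lesson (additional fields omitted), and the hard-coded token is for illustration only; check the course repo for the exact code.

```python
# fastapiserver.py: simplified sketch of the webhook server described above.
# Run with: uvicorn fastapiserver:app  (adjust to your actual file/app names)
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Event(BaseModel):
    # Mock-up of the payload W&B sends; extra fields from the lesson are omitted.
    event_type: str
    event_author: str
    alias: str
    artifact_version: str

    def __str__(self):
        return (f"event_type: {self.event_type}\n"
                f"event_author: {self.event_author}\n"
                f"alias: {self.alias}\n"
                f"artifact_version: {self.artifact_version}")

@app.post("/")
async def receive_event(event: Event, authorization: str | None = Header(default=None)):
    # The "callback": validate the token, then act on the event (here we just log it).
    if authorization != "Bearer secret-random-token":
        raise HTTPException(status_code=401, detail="Invalid token")
    print(event)
    return {"message": "Event processed successfully"}
```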
This part is just saying use localhost unless I say otherwise. So if we go back here, we can just run that shell script, and when I run it, you see the message that the event was processed successfully, and you see these logs get printed. The logs being printed in this way is the result of that specific dunder str definition, because the event gets printed. See where it says print event right here on this side? That print event is what's responsible for the event being printed like this. And we'll come back to this: when we deploy this web server as a more production-grade application, you're going to be looking for logs like these. This information is the same type of information that Weights & Biases will send you. It might not make sense yet what this information means, what an event type is, what an alias is; that will make sense later. I just wanted to ground you in what the web server is. And what you should do is play with this. Ask yourself, what happens if I mess this up? What if I take this field out and then try to send a request? What happens? You get an error: see this unprocessable entity, and it says, hey, there's a required field that you didn't send, something is wrong. We can put that back and try again, and you see now it's successful, and we have those logs printed again. So this is key: this is the basic idea of what webhooks are. On this side, this is just me using curl to send some data to another application, which in this case is this FastAPI server. In production, instead of using curl, we're going to have Weights & Biases send this information to some application. That could be GitHub, that could be another piece of machine learning infrastructure; we're actually going to use something called Modal, just because Modal is very lightweight and accessible to everybody, to show you how you can wire up Weights & Biases to Modal and understand what is happening. But this minimal example is really helpful for understanding what's happening when I show you webhooks in Weights & Biases. Thank you.
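The lesson uses curl, but the same request can be sent from Python, which can be handier once you start scripting tests. A rough equivalent of curl.sh, with placeholder values, might look like this:

```python
# Rough Python equivalent of curl.sh; URL, token, and payload values are placeholders.
import requests

payload = {
    "event_type": "LINK_ARTIFACT",   # illustrative value, not necessarily what W&B sends
    "event_author": "hamel",
    "alias": "production",
    "artifact_version": "v3",
}
headers = {
    "Authorization": "Bearer secret-random-token",
    "Content-Type": "application/json",
}

resp = requests.post("http://127.0.0.1:8000/", json=payload, headers=headers)
print(resp.status_code, resp.json())
```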
Webhooks
694
Weights & Biases
20240528
Webhooks are very common in a lot of developer tools and infrastructure. Webhooks are common mechanisms that different infrastructure providers use to communicate with each other. It is a very flexible way for you to pass information from one tool, like Weights & Biases, to another tool. Resources: -W&B Modal Web Hooks repo - code for this lesson! https://github.com/hamelsmu/wandb-modal-webhook/tree/62c91b055a343f802410ee328e6d70ec602c7ee -FastAPI documentation: https://fastapi.tiangolo.com/ -If you want to learn more about Pydantic, check out our newest LLM Engineering course on Structured Outputs: https://www.wandb.courses/courses/steering-language-models
2024-06-10T12:01:11.754544
https://www.youtube.com/watch?v=syZGoQfSqqU
In the last section, you created a webhook locally. By now, you should understand the basics of webhooks: it's just a web server receiving a request, in response to which you can execute functions or do anything else. The next step towards making this a little more realistic and production-grade is to host the exact same webhook server in the cloud. The easiest way to do this, in my opinion, is to use something like Modal. Modal is a serverless Python environment geared towards machine learning engineers. It's basically like Cloud Functions, which is a Google Cloud product, but it's really designed for Python programmers, especially machine learning engineers, who want to deploy code in a serverless way. I won't go over Modal in a lot of depth. All I will say is that it's very lightweight, it's really easy to get started, and everybody has access to it: you can sign up and your first $30 of credits are free, so it should be fairly straightforward to get started. So the first thing we're going to do is take the code. Well, actually, let's look at the GitHub repo. If you haven't cloned this repo, please go ahead and do it. It's under my GitHub username, hamelsmu, and it's called wandb-modal-webhook. Go ahead and clone the repo; it has all the code that I will use in this particular lesson. Now, I'm going to open the README in this VS Code editor so we can both see it in a bigger font. This repo is an example integration that you can achieve with webhooks and Weights & Biases, specifically Weights & Biases with Modal. But because it uses webhooks, it's really a minimal example of gluing Weights & Biases to anything else, because webhooks are very general. As we discussed before, webhooks are used by lots of different infrastructure as a way to expose a communication channel with the outside world, and this is why Weights & Biases has a webhook trigger. But before we get into Weights & Biases, we want to host this webhook server somewhere that Weights & Biases can access, and to do that, we're going to have to put it on the cloud. So the first thing you're going to want to do is pip install modal; I would create a virtual environment first, using pip, pipenv, or, my favorite, conda. So pip install modal, and then also set up the Modal client. Once you create an account and log into Modal, the next thing you're going to want to do is create a secret. If you recall, we had the secret random token before. So let's go ahead and click on Create Secret. It'll take you to the secrets area, and you click Create New Secret. What we're going to do is go down to Custom. You might be wondering about the Weights & Biases secret template here; we're not going to discuss that right now. We're going to create a custom secret because that's very flexible. OK, I'm going to take the auth token key, which is here, paste it there, and then the secret random token value here. Then I'm going to say next, and the name of my secret is my random secret. Just make sure you get that right for this code to work; it's easy to make a mistake there. It'll say the secret is created, and it shows you a minimal example of how to retrieve the secret.
We will look at the code soon, but you will see this name, my random secret, referenced in the code, and its value will be put in the environment. So let's go ahead and open the web server that we created in the last exercise. We're going to have to refactor this ever so slightly to be able to put it on Modal. Now, I'm not going to go deep into Modal, but what I'll do is show you a diff between the FastAPI server we created before and this one. So this is just a diff, and the most important changes are: we're importing modal and some Modal utilities, like the secret and something for an image so we can build a Docker image; we are creating a Docker image that has FastAPI and Pydantic in it; and we're using something called a stub, which is just to name the application. And instead of the FastAPI decorator, we're using these decorators from Modal. They do roughly the same thing: we have a special decorator that allows us to get the secret, and this web endpoint decorator. The rest of the code is exactly the same. You don't have to get too hung up on what this code does; a lot of it you can think of as boilerplate. Just know that this is a minimal change that allows the code to be deployed to the cloud. And instead of hard-coding the environment variable here, which you wouldn't want to do in real life, it's actually pulled from the environment, specifically the AUTH_TOKEN environment variable, which we set in Modal. So let's go back to the README and preview it so we can actually see it. There we go. The next thing we want to do is deploy this Modal webhook, so let's go ahead and do that: modal deploy server.py. OK, that was fast; because I've deployed this before, Modal caches things intelligently. So this has been deployed, and you'll get two links. This first link is the endpoint of your webhook: instead of localhost, you can send payloads there. And the second is where you can look at the logs, where you view the deployment. Let's click on that; you can see this has been deployed, and then we can go to Logs to see the logs. It's listening for logs, and it looks like there are no logs currently. So the next thing we want to do is send a payload to that web endpoint. We have this URL, and we can use that same curl script. Let me just remind you what's in that curl script: it's just sending a curl command to a URL, the same command we sent to localhost, but instead of localhost, we're going to send it to that endpoint. To do that, we'll just run curl.sh with this endpoint, the one that ends in .run; don't get the two links confused. All right, it's going to send that payload. Aha, message: event processed successfully. We got that back, and if we go back in, you're going to see the logs. You can see on Modal the exact same logs are printed out. So this is exactly the same thing you did locally, but it's now running in the cloud, and now you have a webhook server running in the cloud.
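The Modal-ified server roughly takes the shape below. This is a hedged sketch, not the exact file from the repo, and it assumes the "my-random-secret" secret with an AUTH_TOKEN key created in the Modal UI as described above.

```python
# server.py: rough sketch of the Modal version of the webhook server.
# Deploy with: modal deploy server.py
import os

import modal
from fastapi import Header, HTTPException
from pydantic import BaseModel

image = modal.Image.debian_slim().pip_install("fastapi", "pydantic")
stub = modal.Stub("wandb-webhook")

class Event(BaseModel):
    event_type: str
    event_author: str
    alias: str
    artifact_version: str

@stub.function(image=image, secrets=[modal.Secret.from_name("my-random-secret")])
@modal.web_endpoint(method="POST")
def receive_event(event: Event, authorization: str | None = Header(default=None)):
    # The token now comes from the Modal secret instead of being hard-coded.
    if authorization != f"Bearer {os.environ['AUTH_TOKEN']}":
        raise HTTPException(status_code=401, detail="Invalid token")
    print(event)
    return {"message": "Event processed successfully"}
```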
And hopefully, because you've done this locally, you should have a good mental model of how this is working and what it's doing. The next thing we're going to do is create an automation that triggers Weights & Biases to send the webhook payload, instead of you sending the payload from your computer. That's what I'll be going over next.
Hosting a Webhook server
602
Weights & Biases
20240528
Let's look into how you can deploy your own webhook server. Resources: W&B Modal Web Hooks repo - code for this lesson!: https://github.com/hamelsmu/wandb-modal-webhook/tree/62c91b055a343f802410ee328e6d70ec602c7eeb Modal documentation: https://modal.com/docs/examples
2024-06-10T12:02:15.595450
https://www.youtube.com/watch?v=ymGncYhU-JU
The next thing we're going to do is set up webhooks in Weights & Biases. Just to remind you, there's a README in the GitHub project that I shared, wandb-modal-webhook, and the README describes all of the steps I'm going through right now. So if you get lost at any point or want to refer to a written version of this tutorial, you can reference the README. To set up webhooks, you want to go to your team's page. The name of my team is reviewco, and you're going to want to set up a couple of things. First, you're going to want to set up a secret. I already have the secret here, but basically what you do is click on this, and it'll ask you to create a new secret. The name of the secret is auth token, and the secret itself, as described here, is secret random token, so you would just type that in. You can reveal the secret to make sure that you spelled it right; look, I even spelled it wrong, so it's a good idea to check. And then you can add the secret. I've already added it, we can see that here, and I can just make sure again by going back to the README and copy-pasting it, just because I'm paranoid, so I'm going to go back and update that. Then what we want to do is create a webhook. Now, I've already created the webhook, but I'm going to go ahead and delete it and recreate it so you can see the whole process from scratch. You will probably not have any webhooks yet; you're going to hit New Webhook, and you can name it whatever you want. I'm going to call it modal webhook test. Then, for the URL, you want to enter the URL endpoint for the webhook. If you recall, going back to my terminal, when you launched the Modal app, you got two different URLs; the one that ends in .run is the endpoint, so you want to copy that and paste it here. You can also get this URL from your Modal dashboard: you can go into Apps and see the endpoint right there if you want. Going back to Weights & Biases, you've got the URL here. Now, you have an access token and a secret. This is a little bit tricky; I get mixed up on this too. For this exercise, just go down here and it'll tell you what to do: you're going to want to set the access token, and you're going to want to select auth token. So that's what we'll do: we'll go back, make this the auth token, and then say Create Webhook. And now we have created this webhook and we are ready to wire up the automation.
Webhooks in Weights & Biases
203
Weights & Biases
20240528
Now that we have a server set up, let's hook it up to our Weights & Biases platform. Resources: W&B Modal Web Hooks repo - code for this lesson: https://github.com/hamelsmu/wandb-modal-webhook/tree/62c91b055a343f802410ee328e6d70ec602c W&B configuring a webhook docs: https://docs.wandb.ai/guides/model_registry/model-management-concepts?utm_source=courses&utm_medium=courses&utm_campaign=emm#registered-model
2024-06-10T12:03:06.173150
https://www.youtube.com/watch?v=kEMQhhSPghs
Hello, I want to go over an important subject about webhooks, and that is how to test them. One way to test webhooks is to trigger them by adding aliases and so on, but that's kind of cumbersome, adding an alias just so you can trigger a webhook. Weights & Biases actually offers a specific way to test webhooks, so let me show you that right now. If you recall, the webhook code looks something like this: it's a web server using FastAPI, and if you're familiar with FastAPI, we're defining a data model that expects the payload to follow a certain schema. You're expecting a payload to have all of these fields: event type, event author, alias, artifact version, and so on. If you send a payload that doesn't conform to the schema, then FastAPI is going to throw an error; specifically, Pydantic is going to throw an error. This is a very common pattern. So one way to debug, again, is to trigger it manually and watch it on the server side, but that, again, is cumbersome. So I want to show you a neat trick. If you go into your team's page, this is reviewco, go down to your webhooks and click this triple-dot menu, you'll see this handy Test option. It gives you a sort of playground where you can edit the payload. There are different payloads you can start from. I have this payload already from the model registry that I showed you earlier; I'm going back to the one I already set up, and if you recall, if I view the details, I have this payload. Now, I want to share that when I was first creating this webhook, I had to think about what was being sent, and so you might want to test this payload. This is the same payload, and there are different ways to figure out what's going on if you have an error. One is to look at the logs; in Modal's case, you can look at the logs, and I can see a request and some kind of logs. Now, depending on your application, you might not even have access to the logs, and in this case these logs are not really showing me what I'm looking for and I'd have to dig through them, so it's actually helpful to see the response, especially if you're not using curl. So basically, let's say I have this payload and I forgot to include the author, for example. It's really easy to make a silly mistake like that; it happens to everybody. Again, if I don't have the author, it's going to error, because the payload is missing this field. So let's test that against the Modal endpoint. I'm going to go back to Test Webhook here, and what it's going to do is show you the response from the web server, in this case what Modal is saying. This is a little bit confusing because this is actually Pydantic: it's a low-level Pydantic error response, and it's showing you that a field is missing. The way to translate it is: you're missing this event author field. That's what it means. And this is Pydantic-specific.
This is not anything to do with Weights & Biases; this is what your web server is responding with, and it's really useful to see it. So if I add that field back and test the webhook, we will see that now there's a success: there's a response and there's no error. And again, if I look at the logs, we can see that there was a success here, preceded by some errors. In this case, I can't really tell what those errors are; I can't even see them. This is a real case where having the response helps: there's no error being logged on the server side. I could improve my code to log errors on the server side, but I don't do that here; if there's an error, it just errors without printing anything. So having the response from the server is really useful, and it has helped me catch errors; it's actually helped me in real life to be able to test rapidly. Another way to use this is when you're actually developing your web server. Let's say I want to continuously change this code, change the schema, change the logic of the web server. While you're developing it, you can basically have two windows open, this code and this testing window, and keep sending payloads and keep debugging, which is also very helpful. So I just want to show you that this is actually one of the most helpful features to know about with webhooks: testing, because it helps you develop webhooks.
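A related trick while iterating on the schema: you can validate a payload against the same Pydantic model locally, before sending anything at all. A small sketch, reusing the field names from the earlier server example:

```python
# Local sanity check: validate a payload against the server's schema before sending it.
from pydantic import BaseModel, ValidationError

class Event(BaseModel):
    event_type: str
    event_author: str
    alias: str
    artifact_version: str

payload = {
    "event_type": "LINK_ARTIFACT",
    "alias": "production",
    "artifact_version": "v3",
    # "event_author" deliberately omitted to reproduce the missing-field error above
}

try:
    Event(**payload)
except ValidationError as err:
    print(err)  # points straight at the missing event_author field
```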
Testing Webhooks
352
Weights & Biases
20240528
Testing is an integral part of any development process. To test and debug webhooks you can go the manual way by testing all the different triggers and checking the results or you can use the Weights & Biases automatic testing capabilities! Let me show you how to do it efficiently!
2024-06-10T12:03:56.205247
https://www.youtube.com/watch?v=1rTtEBPoU4k
In this video, I want to go over some exercises you can do to further your intuition about what you can do with webhook automations. If you recall, this is the code that runs the Modal web server that receives the webhook, and in this web server we're not really doing much: we're just printing the event and returning this message to the client. Nothing much is happening here, and what we want to do is more than just print the event; we want to get information out of the event and do something with it. So here are some ideas. One is to download the model: use the Weights & Biases Python client to download the model into the web server. You don't have to do anything else, just download the model; familiarize yourself with the Weights & Biases Python client, which we already went over in the CI/CD for ML course, to retrieve the model. Then do something with that model. A very simple thing you can do is print the number of parameters, but more realistically, score the model against a test set. Finally, do something fun with the webhook, for example, send a Slack message, email, or text. Make the webhook do something fun, let us know what you do with it, and share your code. It will make it a lot more fun for the participants of this class if you share the creative things you end up doing with your webhook.
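As a starting point for the first two exercises, here is a hedged sketch of pulling a model version with the W&B Python client and printing its parameter count; the registry path is a placeholder, and it assumes the artifact holds a Hugging Face-style checkpoint.

```python
# Exercise starter: download a registered model version and count its parameters.
# The registry path is a placeholder; assumes a Hugging Face-style checkpoint.
import wandb
from transformers import AutoModelForCausalLM

run = wandb.init(project="webhook-exercises", job_type="exercise")
artifact = run.use_artifact("reviewco/model-registry/Finetuned-LLM:latest", type="model")
model_dir = artifact.download()

model = AutoModelForCausalLM.from_pretrained(model_dir)
n_params = sum(p.numel() for p in model.parameters())
print(f"Model has {n_params:,} parameters")
run.finish()
```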
Webhook Exercises
95
Weights & Biases
20240528
It is time for you to get your hands dirty; there is no better way to learn than to try it. Here are some exercises I recommend starting with, but you can get as creative as you want.
2024-06-10T12:04:32.278436
https://www.youtube.com/watch?v=CXxjREonU9Y
Hello, welcome back. So far in this course, you've learned about the model registry and specific types of automations, including webhooks and Weights & Biases Launch. Now we want to help you apply these tools to a real project, and for this, my colleague Darek is going to be walking you through a case study. Darek is actually a really impressive person. First of all, Darek is a Kaggle Grandmaster; he is really good at machine learning, but he also has many years of experience in industry applying machine learning in production. He's also one of my favorite teachers; he's taught courses with me in the past, especially courses on Weights & Biases. So I really think you're going to enjoy this lesson. Thanks a lot, Hamel, I really appreciate this introduction. So again, I'm a machine learning engineer at Weights & Biases. One thing that we do at Weights & Biases is work a lot with our customers, and a lot of customers are now very excited about training and fine-tuning LLMs. In fact, we have a separate course about training and fine-tuning LLMs that we will link below this video and that we definitely recommend everyone take. But we also want to bring this use case of fine-tuning a large language model into this course and show how to set up model management processes that will support both training and evaluation of LLMs. Since this is a case study, we will take an existing dataset called Alpaca. It comes from Stanford; it's a synthetic dataset that was created with an LLM, text-davinci-003, and it contains around 50,000 instruction-following examples. In the original Alpaca paper, the team fine-tuned a 7-billion-parameter LLaMA model on this dataset. In our case, since we are trying to be more efficient here and want to make it easier for people following the course, we'll be fine-tuning a smaller, TinyLlama model that has around 1 billion parameters. The important thing when fine-tuning LLMs is evaluation, and in this case study we'll build on the concept of using an LLM as a judge. We will take the generations from the model that we fine-tune, compare them to generations from a model that we define as our baseline, and use another, more powerful LLM, in this case GPT-4, to decide which one is better. This will result in a metric that we will use to decide whether the new model we train is better than the baseline, and whether we want to proceed further with quantizing it, preparing it for deployment, and ultimately deploying it into production. And with this, I want to transition into the live demo session, and I want to encourage you to work along with us: check out the repo that we will provide for the exercises, follow along, and practice setting up these model management processes together with us.
LLM case study overview
202
Weights & Biases
20240528
We welcome another Weights & Biases guest instructor, Darek Kleczek! Darek is a Machine Learning Engineer at Weights & Biases and a Kaggle Competitions Grandmaster. Darek will introduce a case study which will allow us to experience model management and automations while finetuning and evaluating a Large Language Model. Resources: Training and Finetuning LLMs course with Jonathan Frankle (MosaicML), Weiwei Yang (Microsoft Research), Mark Saroufim (Meta) and Darek Kleczek (Weights & Biases): https://www.wandb.courses/courses/training-fine-tuning-LLMs Alpaca: A Strong, Replicable Instruction-Following Model paper: https://crfm.stanford.edu/2023/03/13/alpaca.html Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena paper: https://arxiv.org/abs/2306.05685
2024-06-10T12:05:20.019533
https://www.youtube.com/watch?v=iWCAHdsPxMQ
This is a course about model management, but to manage models, we first need to train some. And that's what we're going to do in this lesson. This is the train.py script that you can access in our course repo, and we will share the link to the course repo below this video. Let's take a look at what's happening here. In this training, we will be using a dataset called Alpaca, which is a dataset for fine-tuning language models on instruction following. So let's take a look at this dataset; I like to see the data before I train my models. You've already seen this artifact view: we are using artifacts in Weights & Biases to manage and version our datasets and models, and in this case, this is a dataset. Let's take a look at the files here. We can see there are two JSON files, for the eval and the training data, but it's difficult to visualize and navigate JSON to explore the data, so I will take a look at the lineage. You've already seen the lineage view. You can see our dataset, specifically version 4 of the dataset, was used in a bunch of training runs, but I want to look at the run that was used to produce this dataset, and my hope is that the person who logged this data... OK, so this was Thomas. Thomas logged this dataset. Let's take a look at whether he visualized the data as he was logging it. And, as expected, you can see a table, a Weights & Biases table with the train dataset, and we can use this table to inspect the data a little bit and get a feel for what we are actually using. As you can see, there are three columns in this dataset: there is an instruction, there is an input column, which is sometimes empty and sometimes contains some text, and there is the output. If we look at one of these instructions, develop a script that prints out the Fibonacci sequence, there is no input here, and the output is... let's take a look... the output is actually a Python script that prints out a Fibonacci sequence. You can look at this in Markdown, hopefully it will render better. Yeah, I will not analyze it, but I would expect it to be correct. Let's take a look at one of the examples that contains an input. The instruction here is: categorize the following post as either a personal or professional post. The input is: just had the best dinner with my team from work. And the output is: it is a professional post as it references a work team. I'm not sure I agree with this, but this is the dataset that we'll work with, and it's good to have a feeling for what it contains. We can take a look at eval, and it's structured in a similar way. There is, for example, translate this text into Spanish; the input is, we are excited to work with you on this project, and the output is the Spanish translation of this text. All right, so let's go back to the training script. I also want to give you a hint on how this data is transformed into an input to the model. This is the data.py file in the mini_llm library within the course repo, and you can see that we are transforming the rows: if there is no input, then we're using this prefix, below is an instruction that describes a task, write a response that appropriately completes the request, and we're feeding in the instruction.
If there is an input, then we're formatting this a bit differently: we're also adding the input into the prompt. Then, in the final step, we're combining both the prompt and the output, which is the completion from the LLM, and that's what we will use to train our LLM. This script starts with a config that we can modify as we launch it. We can adjust the learning rate; we can train this model by freezing some layers in order to fit it on a single GPU; we can potentially use LoRA as an alternative way of fitting the model on a single GPU; we can limit the number of steps if we just want to debug it; and we can adjust the number of evaluation samples. This script is using the Hugging Face Trainer, and because of that we need to add this line, report_to wandb, which makes sure we can see how our training is progressing in Weights & Biases. We will not go in detail through the script; if you are interested in learning more about training and fine-tuning LLMs, we have a fantastic course on that, which you can access at wandb.courses, and I fully recommend it. In this case, we'll just assume that we want to train some models, and we won't go through every element of the script. Just one final thing I want to highlight: we're adding this sampling callback for evaluation, because normally, as we train a model, we get access to the loss. You can see the training loss and the evaluation loss, but very often the evaluation loss is not sufficient to assess the quality of a language model, and for that reason, at a certain point we may want to sample from the model. We want to generate outputs for a given input and either visualize them or automatically evaluate them, and that is what we are going to do here to control the quality of our model. So we're adding this sampling callback that, at the end of training, will use the model checkpoint that we saved and generate outputs for our evaluation dataset. OK, so let's start this training script now. I'm going to go into my console and run train.py, and maybe limit it to, let's say, 20 steps, just for debugging purposes, to see if the script runs. You can see the script is calling Weights & Biases and starting a run in my review entity. And yes, the training is starting, and it will finish quickly because this is just 20 debugging steps. One thing I want to highlight is that, as you run the script, please adjust the project name and, specifically, the Weights & Biases entity. You will not have access to the reviewco entity, so you will want to change it, either to your personal entity or to the team you're using Weights & Biases in. You can also adjust the Weights & Biases tags, and you can change the input model. We are starting here with the 1.1-billion-parameter TinyLlama; that is our pre-trained backbone. These are the things you can adjust as you play around and try to get a better and better instruction-following model. OK, so the script finished training, and it's now generating outputs for our evaluation set.
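For reference, the prompt formatting described here is roughly the standard Alpaca template; a sketch approximating what data.py does (see the course repo for the exact code) is shown below.

```python
# Approximation of the Alpaca-style prompt formatting described above;
# the exact template lives in data.py in the course repo.
def format_example(row: dict) -> str:
    if row.get("input"):
        prompt = (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{row['instruction']}\n\n"
            f"### Input:\n{row['input']}\n\n"
            "### Response:\n"
        )
    else:
        prompt = (
            "Below is an instruction that describes a task. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{row['instruction']}\n\n"
            "### Response:\n"
        )
    # During training, the prompt and the target output are concatenated.
    return prompt + row["output"]
```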
So let's give it a while to finish, and in the next video, we'll pick it up, and we'll try to see how we might want to evaluate this model.
Finetuning an LLM and saving model
463
Weights & Biases
20240528
Before we can manage our models, we need to train some! In this session we will use the following parts of the course repo: Train.py code - follow Darek and train your own model: https://github.com/wandb/edu/blob/main/model-management/train.py Data.py code: https://github.com/wandb/edu/blob/main/model-management/mini_llm/data.py Further resources: Training and Finetuning LLMs course: https://www.wandb.courses/courses/training-fine-tuning-LLMs
2024-06-10T12:06:16.372743
https://www.youtube.com/watch?v=WbbD96-0Nhc
The model that we just trained is still being evaluated; it's generating samples for our evaluation dataset. This might be a good time to take a look at our evaluation code. In this case study, we will use the concept of LLM as a judge. That means we will give an advanced LLM like GPT-4 the instruction we need to respond to, and we will provide two responses: one from our baseline model, which is what we are evaluating against and trying to improve on, and one from a candidate model. And because we are using the model registry, we'll pull both the baseline and the candidate from the model registry based on aliases. So let's take a look at this code. As you can see, we are starting with a configuration object, and we're specifying the model that we will use as a judge. In this case, it's GPT-4, specifically version 0613. We're providing it with a system prompt, and the system prompt says: you will be presented with a choice of a model A and model B response for an instruction; the response should follow the instructions and use the provided context, if there is some. And we're using chain-of-thought prompting: we're asking the model to think step by step about which response is better, provide a reason why, and then answer with a choice of whether model A, model B, or equivalent is better. We are providing the baseline and the candidate model names that we will use to parse our data; we can limit evaluation to a certain number of samples; and we're again providing our Weights & Biases project and entity. Again, for your use case, you may want to change that entity to either your personal entity or the team that you're working in. We're also providing the aliases of the model we are evaluating, in this case our candidate model, and the baseline. This is something we need to make sure exists in the model registry, so we have model versions that are assigned both the baseline alias and the candidate alias. We will be using Instructor to get structured outputs from our evaluation LLM, and I want to highlight a great course that we have on getting structured outputs from LLMs, which you can also access on Weights & Biases courses; there you can go deeper into how Instructor uses Pydantic to get structured outputs. In this case, we're collecting two fields from our LLM: one is the chain of thought, which is the reason why the model selected model A or model B, and the other is the choice, which is either A, B, or equivalent. Then we have the evaluator class, which takes in the config and patches the OpenAI client with Instructor so that we can get structured outputs out of the client. Then we provide it the instruction and the responses from both models, the candidate and the baseline. When using an LLM as a judge, we need to be careful about some of the biases that LLMs have, and one of them is position bias: sometimes whether a response comes first or second influences the choice the LLM makes. For that reason, we shuffle the answers randomly, so with enough samples we will be able to avoid that position bias. So, shuffling the answers, we give the LLM the system prompt that we already reviewed and the input, which is the instruction, the generation from model A, and the generation from model B, and we ask it to respond with a choice.
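A hedged sketch of what that judging call looks like with Instructor is below; the model name, prompt wording, and shuffling detail follow the description above rather than the exact eval.py.

```python
# Sketch of the LLM-as-a-judge call with an instructor-patched OpenAI client.
import random

import instructor
from openai import OpenAI
from pydantic import BaseModel

class Judgement(BaseModel):
    chain_of_thought: str  # step-by-step reasoning behind the choice
    choice: str            # "A", "B", or "equivalent"

client = instructor.patch(OpenAI())

SYSTEM_PROMPT = (
    "You will be presented with responses from model A and model B to an instruction. "
    "Think step by step about which response better follows the instruction and uses the "
    "provided context, explain why, then answer with A, B, or equivalent."
)

def judge(instruction: str, candidate: str, baseline: str) -> tuple[Judgement, bool]:
    # Shuffle the order to reduce position bias; remember where the candidate went.
    candidate_is_a = random.random() < 0.5
    resp_a, resp_b = (candidate, baseline) if candidate_is_a else (baseline, candidate)
    judgement = client.chat.completions.create(
        model="gpt-4-0613",
        response_model=Judgement,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Instruction:\n{instruction}\n\n"
                f"Model A:\n{resp_a}\n\nModel B:\n{resp_b}"
            )},
        ],
    )
    return judgement, candidate_is_a
```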
There are different metrics you might use with LLM as a judge when comparing two models. One classic metric is win rate, but I find win rates sometimes difficult to interpret, especially once you account for ties between models; you need to make certain estimates about how that win rate should be calculated, and I like simple metrics. So the simple metric we will use is: whenever there is a tie, we return a score of zero; whenever the baseline is chosen, the score is minus one; and whenever the candidate is chosen as the better response, the score is one. If we average all of the scores across our evaluation dataset, we get a metric which we will call the candidate preference score. If that metric is positive, the candidate is better than the baseline; if it's negative, the baseline is better. The metric ranges between minus one and one: minus one means the baseline's responses were preferred on every example in our evaluation set, and one means the candidate's responses were preferred on every example. So it's a pretty good metric that is very easy to interpret: the higher the score, the better the candidate. You can put certain rules in place; for example, if the metric is above 0.5, we always promote the candidate to replace the baseline; if it's around zero, it's borderline, and maybe you need to do some more fine-tuning or get a better candidate; and if it's negative, that means strictly that the baseline is better. So that will be the metric we use. We'll send all of the responses to the judge LLM via the API client, gather its answers, calculate our candidate preference score, and log it into Weights & Biases. One thing I want to highlight here, which I like to do personally and think is good practice, is that, as you store the evaluation results in a Weights & Biases run, I like to add lineage to both the candidate and the baseline model and record this in the config. You will see that I'm using our artifacts API, calling wandb.use_artifact with the model being evaluated from the model registry, then taking the path of that model and saving it in my config as the evaluation model path, and I'm doing the same with the baseline. That makes sure that whenever I look at the results from this evaluation, I understand exactly which model was evaluated against which baseline. So I'm downloading the table from each of these models, the evaluation and the baseline, merging the tables, passing this to the evaluator that we reviewed, saving the results into a results dataframe, calculating our candidate preference score (again, a positive score means our candidate is better), and then saving the results into a Weights & Biases table that we can inspect, logging it into Weights & Biases as a table and also storing a CSV file that we can use locally if needed. So our evaluation run is almost finished. In the next video, we'll take this evaluation script and set it up to run in an automated way based on model registry triggers. See you there.
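The scoring rule itself is simple enough to show in a few lines; here is a minimal sketch, assuming the judge outputs from the previous snippet:

```python
# Candidate preference score: tie -> 0, baseline preferred -> -1, candidate preferred -> +1.
def candidate_preference_score(judgements, candidate_is_a_flags):
    scores = []
    for judgement, candidate_is_a in zip(judgements, candidate_is_a_flags):
        if judgement.choice == "equivalent":
            scores.append(0)
        elif (judgement.choice == "A") == candidate_is_a:
            scores.append(1)   # the judge picked the candidate
        else:
            scores.append(-1)  # the judge picked the baseline
    return sum(scores) / len(scores)
```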
Setting up LLM evaluation
463
Weights & Biases
20240528
After training a model, we need to understand how good it is. We will use the LLM-as-a-judge method for the evaluation of our models. In this session we will use the following parts of the course repo: https://github.com/wandb/edu/tree/main/model-management eval.py code: https://github.com/wandb/edu/blob/main/model-management/eval.py Further resources: -LLM Engineering: Structured Outputs course with Jason Liu: https://www.wandb.courses/courses/steering-language-models
2024-06-10T12:07:16.337674
https://www.youtube.com/watch?v=AdBJ0Sk5rSk
Okay, after fixing the max steps parameter, our training run has now finished, and we can see the training loss has converged over a little bit more than a thousand steps. We can also see the generations from our model that we logged in evaluation; we can see both the prompt and the generation. But it would be good now to compare this versus the baseline and see how it looks. So now, instead of running evaluation manually, I will go to artifacts, go to the model that we saved in this training run, and link it to the registry. Remember, we're working with a small instruct LM, and now I will specifically add the candidate alias to this model version. After linking it, we should be able to see this model in the registry, and hopefully the evaluation action has triggered. So let's take a look at the queue. You can see the status for a new run is queued; it's waiting for an agent, and now it's running, so the agent has picked up this specific evaluation run. We should be able to go there and double-check the config. In the config, we can see that the candidate model is now small instruct LM v3, and it's evaluated against our baseline, which is version zero. When this run finishes, we should be able to see the results. The run is still going, so let's wait a little bit and then we'll see the results. Okay, we can now see that the candidate preference score is 0.53, so this run is a lot better than our baseline. This is a candidate that, based on this score, we can do something more with: maybe we can move it to the next stage, maybe to production, maybe to the quantization stage. We should be able to see the comparison between the baseline and the candidate. We can, for example, group by the choice and see that there are just 12 examples where the baseline is better than our model, there are 23 cases where both are considered equal by our LLM judge, and in 65 cases the candidate is better. So it's clearly a better model than the baseline. If you want to do a deep dive and understand where there might be regressions, or where the candidate might be worse than the baseline, we can also use this table. Maybe I remove this grouping and just focus on the error analysis: if I sort this in a descending way, then these are all of the examples where the candidate is worse than the baseline. We can take a look at some of the reasoning here from our LLM judge and use this as an insight for improving our model in the next run; maybe we need to give it some more data, maybe we need to change the hyperparameters, and this will help us improve our model over time. So hopefully you've seen the value of setting up all of these automations and evaluations in an automated way based on actions from the model registry. And yeah, we're looking forward to having you practice this on your projects and in your organizations.
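For reference, the same registry link can be done from code instead of the UI. This is a sketch with placeholder paths and names; adding the "candidate" alias is what fires the evaluation trigger.

```python
# Sketch: link a logged model artifact to the model registry and tag it as the candidate.
# Paths, project, and registered-model names are placeholders.
import wandb

api = wandb.Api()
model_artifact = api.artifact("my-team/model-management/small-instruct-lm:v3", type="model")
model_artifact.link("my-team/model-registry/Small Instruct LM", aliases=["candidate"])
```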
LLM Evaluation results
255
Weights & Biases
20240528
In this lesson we will take a look at the results from our automated evaluation runs and make conclusions about our candidate model.
2024-06-10T12:07:57.952398
https://www.youtube.com/watch?v=qfWxLhXdPiM
Hello, welcome back. The next thing I want to talk to you about is automation design patterns. So what I mean by this is when to use webhooks and when to use launch, because those are two types of automation that you have learned about, and you might be wondering when should you use one over the other. So I'm going to talk a little bit about that. So in this slide, I'm going to do kind of a comparison of webhooks and launch, just to give you some background and some grounding. So just to review, webhooks and launch can both be triggered by an event, and that event can be some kind of action in the model registry, such as adding a tag or adding an artifact, so on and so forth. Now, if you recall, in webhooks, when you have an event, what happens is it sends a payload. And a payload is just kind of a package of information. You can think of it as a JSON that has information. And the information are things related to weights and biases, metadata, and the artifact in the registry. And your web server receives this webhook. And we can call this web server a consumer. Now, it's not always strictly a web server receives this web hook and we can call this web server a consumer now it's not always strictly a web server that's why I called it consumer because web hooks are very common when we talk about developer tools so one there's many different tools that can receive a web hook doesn't have to strictly be a web server and one example of that is GitHub Actions. You can have weights and biases send events to GitHub Actions. Now, with webhooks, you are responsible for handling the queue. And what is a queue? Queue is just kind of a holding area where you have all of your jobs that you want to execute. And so with webhooks, weights and biases fires the webhooks, you receive the payload, and then if you're receiving many different payloads, it's up to you to implement how to process those payloads, you know, in what order, and keeping track of those payloads. And really, like, the idea behind webhooks is not really to have your own web server and manage all that yourself most of the time. The real idea is to integrate with third parties, third party software like GitHub Actions. And the other thing you can do with webhooks is you can build tools on top of weights and biases. So webhooks is kind of a general purpose, flexible, kind of paradigm that allows you to kind of do anything you want. It's very flexible. You have to catch this payload, and based on the payload, you decide what to do. You can download the artifacts. You can do whatever you need to do. Now, launch is a bit different. So the idea is that in Weights and Biases, in response to an event, a lot of times you want to run code. And specifically, you might want to run training code or evaluation code. And you might want to do a training run. And instead of having to orchestrate this yourself, for example, having a web server, catching this kind of payload, and then managing your own queue, and then finally running some code, let's say for a training evaluation run, launch packages that up very nicely. So you don't have to, you have to do less. So in the case where you want to do a training run or evaluation run, the way that launch works is when an event is triggered, that event is put in a queue. And Weights and Biases has this queue. 
And essentially what you do is, on the compute of your choice, whether that's a VM, a local machine, Kubernetes, and so on, wherever your compute is, you run agents, and agents consume from the queue. So it's a different kind of consumer: it's constantly polling the queue and grabbing things from it. So you don't have to manage the queue; things get pushed into the queue and agents pull from the queue, and that's very convenient. And the idea behind Launch is, again, that it's meant for training and eval runs, and agents are designed to run Docker or Python code on the compute of your choice. So if you need to do a training run or an evaluation run, if that's what you have in mind, then you should go with Launch. If you are trying to create tools on top of Weights and Biases that need lots of flexibility, where you might want to do other things, or you're trying to integrate with a third-party service that offers webhooks, then use webhooks. I just want to summarize these points in this slide. Again, webhooks are about integrating with third-party applications; that's what they're intended for. You have to manage your own queue, and you can also build custom tools and applications on top of Weights and Biases with webhooks. It's the most flexible, general-purpose communication pattern; it is a very general pattern for when you have different applications communicating with each other, and there are very few or no assumptions made about your infrastructure. You build things the way you need to build them, and you have to build them yourself. Launch does a little bit more for you. With Launch, Weights and Biases manages the queue for you. There is a paved path for running Python and Docker code in response to events, on the compute of your choice. And again, this compute of your choice is very general: you can run it anywhere, including Kubernetes and things like SageMaker for AWS and Vertex for GCP. And it's really meant for training or evaluation runs. So I hope this clarifies your mental model of when to use webhooks and when to use Launch.
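To make the webhook side concrete, here is an illustrative consumer, not W&B code: a tiny FastAPI endpoint that receives the payload W&B sends on a registry event and pushes it onto a queue that you manage yourself. The payload keys are whatever you configure in the webhook's payload template, so treat the example fields as assumptions.

```python
# Illustrative webhook consumer: with webhooks, the queue is yours to manage.
import queue

from fastapi import FastAPI, Request

app = FastAPI()
jobs: "queue.Queue[dict]" = queue.Queue()   # your own holding area for incoming work


@app.post("/wandb-webhook")
async def handle_registry_event(request: Request):
    payload = await request.json()
    # e.g. {"event_type": "LINK_ARTIFACT", "artifact_version": "...", "alias": "candidate"}
    jobs.put(payload)                        # you decide ordering and processing
    return {"status": "queued"}
```

With Launch, by contrast, Weights and Biases holds the queue and you just start an agent on your compute (something like wandb launch-agent with your queue and entity) and let it pull jobs.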
Automation design patterns
405
Weights & Biases
20240528
In this lesson Hamel compares Webhook vs. Launch automation and provides guidance on when to use them.
2024-06-10T12:08:52.116676
https://www.youtube.com/watch?v=opMVVu_4-Ps
Hey Noah, thanks a lot for helping me do this overview of the model registry and the nuts and bolts around it. You know, in the early days of Weights and Biases, I built a lot of what I would call enterprise tools myself, gluing things together. And I know that you have created a lot of features now that make it really easy for people, so they don't have to make their own tools; they can just have one or two lines of code and glue things together. Can you talk a little bit about some of those enterprise features, or some of the notable ones that come to your mind? What's really popular and what's on your mind? Yeah, sure. Wow, where to begin? There's a whole host of good stuff, but I think one of the most important ones to bring up, which folks who are already using artifacts might already be familiar with, is less specific to the model registry. We're talking about enterprise teams that typically might have all of their models and datasets stored somewhere else. If you've been following this course and seeing the demos on the model registry, you might be thinking: that's great, but we're already storing all of our model files in S3, so how is this relevant to me? I don't really want to download them and move them over just to upload them to Weights and Biases. So I'm going to pull up the documentation here, which talks through how you can actually track files that live not in Weights and Biases but externally. The most common examples we see customers working with are models saved in S3 or in any of the other major cloud providers, like Azure or GCS. And, similar to how you would add a file to an artifact, if we take a look at this code snippet here, you can have your artifact store a reference to any location inside a bucket. That way you're not dealing with moving things over to Weights and Biases; you're leaving everything inside the buckets as is, but you're adding pointers to where those files live. So instead of doing add file or add directory, you would just be adding a reference here (there's a short code sketch of this at the end of this segment), depending on which type of provider you're working with, and we also work with NFS drives as well. We have some great guides which specifically walk through this, but I would take a look at the documentation; this is more of a high-level point: hey, you don't need to be moving stuff out of your buckets. The advantage of still using Weights and Biases to mark your models and link them to the registry is that it acts as a layer on top of, let's say, S3, where you're able to get all of the features around automatic versioning and lineage, that graph view we talked about, by using the APIs, while the underlying storage is still your cloud provider. Cool. And then, on the topic of important features to call out for enterprise customers, I also wanted to talk through protected aliases. We spent a good amount of time talking about aliases and how they're relevant in the model registry to track which version is in which state of the model lifecycle.
And so this concept of protected aliases is that you can mark a few aliases as being kind of admin only, as in not everyone will be able to add the alias production or evaluation. And typically we see customers using kind of a key set of, you know, the important aliases that mark like production, that not everyone should be able to promote a model to this state. And why is this important? Because sometimes you might have an alias like production that is actually going to kick off this automation. As we talked about, you can use an alias to kick off an automation. And so you want to make sure that not everyone in the team is, you know, adding this production alias. So that's another feature to call out to help users kind of control or help team admins control, you know, save a special set of aliases to only be added by folks that are under this role of a model registry admin, which you can see above. And that's managed inside, the docs covers this, but you can find it inside kind of the settings of your model registry.
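As promised above, a sketch of the reference-artifact pattern: the model files stay in your bucket, and the artifact just stores a checksummed pointer. The bucket path and artifact names are placeholders.

```python
# Sketch: track model files that stay in S3 by reference instead of uploading them.
import wandb

with wandb.init(project="model-management", job_type="register-model") as run:
    artifact = wandb.Artifact("small-instruct-lm", type="model")
    # A pointer to the bucket location, not an upload; W&B records checksums and versions.
    artifact.add_reference("s3://my-bucket/models/small-instruct-lm/")
    run.log_artifact(artifact)
```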
Enterprise Model Management features
322
Weights & Biases
20240528
Tune in to Hamel and Noa sharing some tips on how to take advantage of enterprise-grade model management: -Using external files docs: https://docs.wandb.ai/guides/artifacts/track-external-files?utm_source=courses&utm_medium=courses&utm_campaign=emm#docusaurus_skipToContent_fallback -Protected aliases docs: https://docs.wandb.ai/guides/model_registry/access_controls?utm_source=courses&utm_medium=courses&utm_campaign=emm#add-protected-aliases
2024-06-10T12:10:00.524165
https://www.youtube.com/watch?v=l8pRSuU81PU
Hi everyone. So today we are going to be continuing our Zero to Hero series, and in particular today we are going to reproduce the GPT-2 model, the 124 million version of it. So when OpenAI released GPT-2, this was 2019, and they released it with this blog post. On top of that they released this paper, and on top of that they released this code on GitHub, so OpenAI slash GPTpt2. Now when we talk about reproducing gpt2 we have to be careful because in particular in this video we're going to be reproducing the 124 million parameter model. So the thing to realize is that there's always a miniseries when these releases are made. So there are the gpt2 miniseries made up of models at different sizes and usually the biggest model is called the GPT-2. But basically the reason we do that is because you can put the model sizes on the x-axis of plots like this and on the y-axis you put a lot of downstream metrics that you're interested in like translation, summarization, question answering and so on and you can chart out these scaling laws. So basically as the model size increases, you're getting better and better at downstream metrics. And so in particular for GPT-2, if we scroll down in the paper, there are four models in the GPT-2 miniseries, starting at 124 million all the way up to 1,558 million. Now the reason my numbers, the way I say them, disagree with this table is that this table is wrong. If you actually go to the GPT-2 GitHub repo, they sort of say that there was an error in how they added up the parameters, but basically this is the 124 million parameter model, etc. So the 124 million parameter had 12 layers in the transformer, and it had 768 channels in the transformer, 768 dimensions. And I'm going to be assuming some familiarity with what these terms mean because I covered all of this in my previous video, let's build GPT from scratch. So I covered that in the previous video in this playlist. Now, if we do everything correctly and everything works out well, by the end of this video, we're gonna see something like this, where we're looking at the validation loss, which basically measures how good we are at predicting the next token in a sequence on some validation data that the model has not seen during training. And we see that we go from doing that task not very well, because we're initializing from scratch, all the way to doing that task quite well by the end of the training. And hopefully we're going to beat the GPT-124M model. Now, previously when they were working on this, this is already five years ago, so this was probably a fairly complicated optimization at the time, and the GPUs and the compute was a lot smaller. Today, you can reproduce this model in roughly an hour, or probably less even, and it will cost you about 10 bucks if you want to do this on the cloud computer that you can all rent and if you pay $10 for that computer you wait about an hour or less you can actually achieve a model that is as good as this model that OpenAI released. And one more thing to mention is unlike many other models, OpenAI did release the weights for GPT-2. So those weights are all available in this repository. But the GPT-2 paper is not always as good with all of the details of training. So in addition to the GPT-2 paper, we're going to be referencing the GPT-3 paper, which is a lot more concrete in a lot of the hyperparameters and optimization settings and so on. And it's not a huge departure in the architecture from the GPT-2 version of the model. 
So we're going to be referencing both GPT-2 and GPT-3 as we try to reproduce GPT-2-124M. So let's go. So the first thing I would like to do is actually start at the end or at the target. So in other words, let's load the GPT-2-124M model as it was released by OpenAI and maybe take it for a spin. Let's sample some tokens from it. Now the issue with that is when you go to the code base of GPT-2 and you go into the source and you click in on the model.py you'll realize that actually this is using TensorFlow. So the original GPT-2 code here was written in TensorFlow, which is, you know, let's just say not used as much anymore. We like to use PyTorch because it's a lot friendlier, easier, and I just personally like it a lot more. The problem with that is the initial code is in TensorFlow. We'd like to use PyTorch. Instead, to get the target, we're going to use the Hugging Face Transformers code, which I like a lot more. So when you go into the Transformers, Source, Transformers, Models, GPT-2, Modeling, GPT-2.py, you will see that they have the GPT-2 implementation of that transformer here in this file. And it's like medium readable, but not fully readable. And it's like medium readable, but not fully readable. But what it does is it did all the work of converting all those weights from TensorFlow to PyTorch-friendly. And so it's much easier to load and work with. So in particular, we can look at the GPT-2 model here, and we can load it using Hugging Face Transformers. So swinging over, this is what that looks like. From Transformers, import the GPT-2 LM head model, and then from pre-trained GPT-2. Now, one awkward thing about this is that when you do GPT-2 as the model that we're loading, this actually is the 124 million parameter model. If you want the actual, the GPT-2, the 1.5 billion, then you actually want to do "-xl". So this is the 124m, our target. Now what we're doing is when we actually get this, we're initializing the PyTorch NN module as defined here in this class. From it I want to get just the state dict, which is just the raw tensors. So we just have the tensors of that file and by the way here this is a Jupyter notebook but this is Jupyter notebook running inside VS Code so I like to work with it all in a single sort of interface so I like to use VS Code so this is the Jupyter notebook extension inside VS Code so when we get the state dict, this is just a dict, so we can print the key and the value which is the tensor and let's just look at the shapes. So these are sort of the different parameters inside the GPT-2 model and their shape. So the w weight for token embedding is of size 50,257 by 768. Where this is coming from is that we have 50,257 tokens in the GPT-2 vocabulary. And the tokens, by the way, these are exactly the tokens that we've spoken about in the previous video on my tokenization series. So the previous videos, just before this, I go into the previous video on my tokenization series. So the previous videos just before this I go into a ton of detail on tokenization. GPT-2 tokenizer happens to have this many tokens. For each token we have a 768 dimensional embedding that is the distributed representation that stands in for that token. So each token is a little string piece, and then the 768 numbers are the vector that represents that token. And so this is just our lookup table for tokens. And then here we have the lookup table for the positions. 
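Roughly what the notebook shown here does, sketched out: load the Hugging Face GPT-2 (124M) weights and inspect the raw tensors in the state dict.

```python
# Load the released GPT-2 124M weights via Hugging Face and print the state dict shapes.
from transformers import GPT2LMHeadModel

model_hf = GPT2LMHeadModel.from_pretrained("gpt2")   # "gpt2" is the 124M model; "gpt2-xl" is 1.5B
sd_hf = model_hf.state_dict()                        # just the raw tensors

for k, v in sd_hf.items():
    print(k, v.shape)
# transformer.wte.weight -> torch.Size([50257, 768])   (token embedding table)
# transformer.wpe.weight -> torch.Size([1024, 768])    (position embedding table)
# ...
```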
So because GPT-2 has a maximum sequence length of 1,024, we have up to 1,024 positions that each token can be attending to in the past. And every one of those positions in GPD2 has a fixed vector of 768 that is learned by optimization. And so this is the position embedding and the token embedding. And then everything here is just the other weights and biases and everything else of this transformer. and then everything here is just the other weights and biases and everything else of this transformer so when you just take for example the positional embeddings and flatten it out and take just the 20 elements you can see that these are just the parameters these are weights floats just we can take and we can plot them so these are the position embeddings and we get something like this and you can see that this has structure and And it has structure because what we have here really is every row in this visualization is a different position, a fixed absolute position in the range from 0 to 1024. And each row here is the representation of that position. And so it has structure because these positional embeddings end up learning these sinusoids and cosines that sort of like represent each of these positions. And each row here stands in for that position and is processed by the transformer to recover all the relative positions and sort of realize which token is where and attend to them depending on their position, not just their content. So when we actually just look into an individual column inside these, and I just grabbed three random columns, you'll see that, for example, here we are focusing on every single channel, and we're looking at what that channel is doing as a function of position from one from zero to 1023 really and we can see that some of these channels basically like respond more or less to different parts of the position spectrum so this green channel really likes to fire for everything after 200 up to 800, but not less, but a lot less, and has a sharp drop-off here near zero. So who knows what these embeddings are doing and why they are the way they are? You can tell, for example, that because they're a bit more jagged and they're kind of noisy, you can tell that this model was not fully trained. And the more trained this model was the more you would expect to smooth this out and so this is telling you that this is a little bit of an under-trained model but in principle actually these curves don't even have to be smooth this should just be totally random noise and in fact in the beginning of the optimization it is complete random noise because this position embedding table is initialized completely at random so in the beginning you have jaggedness and the fact that you end up with something smooth is already kind of impressive, that that just falls out of the optimization, because in principle, you shouldn't even be able to get any single graph out of this that makes sense. But we actually get something that looks a little bit noisy, but for the most part looks sinusoidal-like. In the original transformer paper, the Attention is All You Need paper, the positional embeddings are actually initialized and fixed, if I remember correctly, to sinusoids and cosines of different frequencies. And that's the positional encoding, and it's fixed. But in GPT-2, these are just parameters, and they're trained from scratch, just like any other parameter. And that seems to work about as well. And so what they do is they kind of like recover these sinusoidal like features during the optimization. 
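A small sketch of the visualization just described, assuming the sd_hf state dict from the earlier snippet; the channel indices are arbitrary picks just to inspect structure.

```python
# Visualize the learned position-embedding table and a few of its columns.
import matplotlib.pyplot as plt

wpe = sd_hf["transformer.wpe.weight"]   # (1024, 768): one row per absolute position
plt.imshow(wpe, cmap="gray")            # rows = positions, columns = channels
plt.show()

for ch in [150, 200, 250]:              # arbitrary channels, chosen only for illustration
    plt.plot(wpe[:, ch])
plt.show()
```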
We can also look at any of the other matrices here. So here I took the first layer of the transformer and looking at like one of its weights and just the first block of 300 by 300 and you see some structure but like again like who knows what any of this is if you're into mechanistic interpretability you might get a real kick out of trying to figure out like what is going on what is this structure and what does this all mean but we're not gonna be doing that in this video but we definitely see that there's some interesting structure and that's kind of cool what we're mostly interested in is we've loaded the weights of this model that was released by OpenAI and now using the Hugging Face transformers we can not just get all the raw weights but we can also get the what they call pipeline and sample from it. So this is the prefix hello I'm a language model comma, and then we're sampling 30 tokens and we're getting five sequences and I ran this and this is what it produced. Hello I'm a language model, but what I'm really doing is making a human readable document. There are other languages but those are dot dot dot so you can read through these if you like but basically these are five different completions of the same prefix from this gpt2124m now if i go here i took this example from here and sadly even though we are fixing the seed we are getting different generations from the snippet than what they got so presumably the code changed but what we see though at this stage that's important is that we are getting coherent text so we've loaded the model successfully we can look at all its parameters and the keys tell us where in the model these come from and we want to actually write our own GPT-2 class so that we have full understanding of what's happening there we don't want to be working with something like the modeling GPT-2.py because it's just too complicated. We want to write this from scratch ourselves. So we're going to be implementing the GPT model here in parallel. And as our first task, let's load the GPT-2 on 24M into the class that we're going to develop here from scratch. That's going to give us confidence that we can load the OpenAI model, and therefore there's a setting of weights that exactly is the 124 model. But then, of course, what we're going to do is we're going to initialize the model from scratch instead and try to train it ourselves on a bunch of documents that we're going to get, and we're going to try to surpass that model. So we're going to get different weights, and going to try to surpass that model. So we're going to get different weights and everything's going to look different, hopefully better even, but we're going to have a lot of confidence that because we can load the OpenAI model, we are in the same model family and model class and we just have to rediscover a good setting of the weights, but from scratch. So let's now write the GPT-2 model and let's load the weights and make sure that we can also generate text that looks coherent. Okay, so let's now write the GPT-2 model, and let's load the weights, and make sure that we can also generate text that looks coherent. Okay, so let's now swing over to the Attention is All You Need paper that started everything, and let's scroll over to the model architecture, the original transformer. Now, remember that GPT-2 is slightly modified from the original transformer. In particular, we do not have the encoder. GPT-2 is a decoder-only transformer, as we call it. So this entire encoder here is missing. 
In addition to that, this cross attention here that was using that encoder is also missing. So we delete this entire part. Everything else stays almost the same, but there are some differences that we're going to sort of look at here. So there are two main differences. When we go to the GPT-2 paper under 2.3.model, we notice that first there's a reshuffling of the layer norms, so they change place, and second an additional layer normalization was added here to the final self-attention block. So basically all the layer norms here, instead of being after the MLP or after the attention, they swing before it and an additional layer norm gets added here right before the final classifier. So now let's implement some of the first sort of skeleton NN modules here in our GPT NN module. And in particular we're going to try to match up this schema here that is used by Hugging Face transformers, because that will make it much easier to load these weights from this state dict. So we want something that reflects this schema here. So here's what I came up with. Basically, we see that the main container here that has all the modules is called transformer. So I'm reflecting that with an nnModuleDict. And this is basically a module that allows you to index into the submodules using keys, just like a dictionary strings. Within it, we have the weights of the token embeddings, WTE, and that's an nnEmbedding. And the weights of the position embeddings which is also just an NN embedding and if you remember an embedding is really just a fancy little wrapper module around just a single single array of numbers a single block of numbers just like this it's a single tensor and an embedding is a glorified wrapper around the tensor that allows you to access its elements by indexing into the rows. Now in addition to that we see here that we have a dot h and then there's a this is indexed using numbers instead of indexed using strings so there's a dot h dot 0 1 2 etc all the way up till dot h dot 11 and that's because there are 12 layers here in this transformer so to reflect that i'm creating also an h i think that probably stands for hidden and instead of a module dict this is a model list so we can index it using integers exactly as we see here dot zero dot one two etc and the module list has uh n layer blocks and the blocks are yet to be defined in a module in a bit. In addition to that, following the GPT-2 paper, we need an additional final layer norm that we're going to put in there and then we have the final classifier, the language model head, which projects from 768, the number of embedding dimensions in this GPT, all the way to the vocab size which is 50,257 and GPT-2 uses no bias for this final sort of projection. So this is the skeleton and you can see that it reflects this. So the WTE is the token embeddings. Here it's called output embedding but it's really the token embeddings the pe is the positional encodings those two pieces of information as we saw previously are going to add and then go into the transformer the dot h is the all the blocks in gray and the lnf is this new layer that gets added here by the g-2 model and lmhead is this linear part here. So that's the skeleton of the GPT-2. We now have to implement the block. Okay so let's now recurse to the block itself. So we want to define the block. So I'll start putting them here. So the block I like to write out like this. These are some of the initializations, and then this is the actual forward pass of what this block computes. 
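For reference, here is a sketch that follows the skeleton and the sub-modules described in this segment and the next few: the config, the causal self-attention with the heads folded into a batch dimension, the MLP with the tanh-approximate GELU, and the pre-norm Block. It mirrors the Hugging Face key names so the weights can be ported over, but it is a sketch rather than the video's exact file.

```python
# GPT-2 skeleton and sub-modules, mirroring the Hugging Face key names
# (transformer.wte, transformer.wpe, transformer.h.0..11, transformer.ln_f, lm_head).
import math
from dataclasses import dataclass

import torch
import torch.nn as nn
import torch.nn.functional as F


@dataclass
class GPTConfig:
    block_size: int = 1024    # maximum sequence length
    vocab_size: int = 50257   # 50,000 BPE merges + 256 byte tokens + 1 <|endoftext|> token
    n_layer: int = 12
    n_head: int = 12
    n_embd: int = 768


class CausalSelfAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd)   # q, k, v in one projection
        self.c_proj = nn.Linear(config.n_embd, config.n_embd)
        self.n_head = config.n_head
        self.n_embd = config.n_embd
        # autoregressive mask: each token may only attend to itself and earlier tokens
        self.register_buffer("bias", torch.tril(torch.ones(config.block_size, config.block_size))
                                          .view(1, 1, config.block_size, config.block_size))

    def forward(self, x):
        B, T, C = x.size()
        q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
        # (B, T, C) -> (B, n_head, T, head_size): the heads become a batch dimension
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
        att = att.masked_fill(self.bias[:, :, :T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)               # attention weights sum to one
        y = att @ v                                # weighted sum of the values
        y = y.transpose(1, 2).contiguous().view(B, T, C)   # re-assemble (concatenate) the heads
        return self.c_proj(y)


class MLP(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.c_fc = nn.Linear(config.n_embd, 4 * config.n_embd)
        self.gelu = nn.GELU(approximate="tanh")    # GPT-2 used the tanh approximation
        self.c_proj = nn.Linear(4 * config.n_embd, config.n_embd)

    def forward(self, x):
        return self.c_proj(self.gelu(self.c_fc(x)))


class Block(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.ln_1 = nn.LayerNorm(config.n_embd)
        self.attn = CausalSelfAttention(config)
        self.ln_2 = nn.LayerNorm(config.n_embd)
        self.mlp = MLP(config)

    def forward(self, x):
        x = x + self.attn(self.ln_1(x))   # pre-norm; attention is where tokens communicate (reduce)
        x = x + self.mlp(self.ln_2(x))    # pre-norm; the MLP works on each token individually (map)
        return x


class GPT(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.transformer = nn.ModuleDict(dict(
            wte=nn.Embedding(config.vocab_size, config.n_embd),   # token embeddings
            wpe=nn.Embedding(config.block_size, config.n_embd),   # position embeddings
            h=nn.ModuleList([Block(config) for _ in range(config.n_layer)]),
            ln_f=nn.LayerNorm(config.n_embd),                     # extra final layer norm in GPT-2
        ))
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
```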
And notice here that there's a change from the transformer again that is mentioned in the GPT-2 paper. So here the layer normalizations are after the application of attention or feedforward. In addition to that, note that the normalizations are inside the residual stream you see how feed forward is applied and this arrow goes through and through the normalization so that means that your residual pathway has normalizations inside them and this is not very good or desirable you actually prefer to have a single clean residual stream all the way from supervision all the way down to the inputs the tokens and this is very desirable and nice because the gradients that flow from the top if you remember from your micro grad addition just distributes gradients during the backward stage to both of its branches equally so addition is a branch in the gradients. And so that means that the gradients from the top flow straight to the inputs, the tokens, through the residual pathways unchanged. But then in addition to that, the gradient also flows through the blocks and the blocks contribute their own contribution over time and kick in and change the optimization over time. But basically clean residual pathway is desirable from an optimization perspective. And then this is the pre-normalization version where you see that Rx first goes through the layer normalization and then the attention and then goes back out to go to the layer normalization number two and the multi-layer perceptron, sometimes also referred to as a feedforward network or an FFN. And then that goes into the residual stream again. And the one more thing that is kind of interesting to note is that, recall that attention is a communication operation. It is where all the tokens, and there's 1,024 tokens lined up in a sequence. And this is where the tokens communicate. This is where they exchange information. So attention is an aggregation function function it's a pooling function it's a weighted sum function it is a reduce operation whereas mlp this mlp here happens at every single token individually there's no information being collected or exchanged between the tokens. So the attention is the reduce and the MLP is the map. And what you end up with is that the transformer ends up just being a repeated application of map reduce, if you want to think about it that way. So this is where they communicate and this is where they think individually about the information that they gathered. And every one of these blocks iteratively refines the representation inside the residual stream. So this is our block, slightly modified from this picture. Okay so let's now move on to the MLP. So the MLP block I implemented as follows. It is relatively straightforward, we basically have two linear projections here that are sandwiched in between the GELU non-linearity. So nn.gelu approximate is 10h. Now when we swing over to the PyTorch documentation, this is nn.gelu, and it has this format, and it has two versions, the original version of GELU, which we'll step into in a bit, and the approximate version of GELU, which we can request using 10H. So as you can see, just as a preview here, GELU is basically like a ReLU, except there's no exactly flat tail here at exactly zero. But otherwise, it looks very much like a slightly smoother ReLU. It comes from this paper here, Gaussian Error Linear Units, and you can step through this paper and there's some mathematical kind of like reasoning that leads to an interpretation, at least to the specific formulation. 
It has to do with stochastic regularizers and the expectation of a modification to it after dropout, so you can read through all of that if you'd like here. And there's a little bit of the history as to why there isn't an approximate version of GELU and that comes from this issue here as far as I can tell. And in this issue Daniel Hendrix mentions that at the time when they developed this non-linearity the ERF function which you need to evaluate the exact GELU was very slow in TensorFlow so they ended up basically developing this approximation and this approximation, so they ended up basically developing this approximation. And this approximation then ended up being picked up by BERT and by GPT-2, etc. But today there's no real good reason to use the approximate version. You'd prefer to just use the exact version, because my expectation is that there's no big difference anymore. And this is kind of like a historical kind of quirk. But we are trying to reproduce GPT-2 exactly, and GPT-2 used the 10h approximate version, so we prefer to stick with that. Now one other reason to actually just intuitively use GELU instead of RELU is previously in the in videos in the past we've spoken about the dead ReLU neuron problem, where in this tail of a ReLU if it's exactly flat at zero any activations that fall there will get exactly zero gradient there's no change there's no adaptation there's no development of the network if any of these activations end in this flat region but the GeLU always contributes a local gradient and so there's always going to be a change always going to be an adaptation and sort of smoothing it out ends up empirically working better in practice as demonstrated in this paper and also as demonstrated by it being picked up by the BERT paper, GPT-2 paper and so on. So for that reason we adopt this non-linearity here in the GPT-2 reproduction. Now in more modern networks also like LAMA3 and so on this non-linearity also further changes to SWGLU and other variants like that, but for GPT-2 they use this approximate GELV. Okay, and finally we have the attention operation. So let me paste in my attention. So I know this is a lot, so I'm going to go through this a bit quickly, a bit slowly, but not too slowly, because we have covered this in the previous video, and I would just point you there. So this is the attention operation. Now, in the previous video, you will remember, this is not just attention. This is multi-headed attention, right? And so in the previous video, we had this multi-headed attention module. And this implementation made it obvious that these heads are not actually that complicated. There's basically in parallel inside every attention block, there's multiple heads and they're all functioning in parallel and their outputs are just being concatenated. And that becomes the output of the multi-headed attention. So the heads are just kind of like parallel streams and their outputs get concatenated. And so it was very simple and made the head be kind of like fairly straightforward in terms of its implementation. What happens here is that instead of having two separate modules and indeed many more modules that get concatenated, all of that is just put into a single self-attention module. And instead, I'm being very careful and doing a bunch of transpose split tensor gymnastics to make this very efficient impact torch. But fundamentally and algorithmically, nothing is different from the implementation we saw before in this GIF repository. 
So to remind you very briefly, and I don't want to go into this in too much detail: we have these tokens lined up in a sequence, and there are 1,024 of them, and each token at this stage of the attention emits three vectors, the query, key, and value. First, the queries and the keys multiply each other to get the attention amount, like how interesting they find each other; they have to interact multiplicatively. So what we're doing here is calculating the QKV while splitting it, and then there's a bunch of gymnastics, as I mentioned. The way this works is that we're making the number of heads, nh, into a batch dimension, so it's a batch dimension just like B, and in the operations that follow, PyTorch treats B and nh as batches and applies all the operations on all of them in parallel, in both the batch and the heads. The operations that get applied are, number one, the queries and the keys interact to give us our attention. This is the autoregressive mask that makes sure that the tokens only attend to tokens before them and never to tokens in the future. The softmax here normalizes the attention, so it always sums to one. And then recall from the previous video that multiplying the attention matrix with the values is basically a way to do a weighted sum of the values of the tokens that we found interesting, at every single token. The final transpose, contiguous, and view is just reassembling all of that again, and this actually performs the concatenation operation. So you can step through this slowly if you'd like, but it is mathematically equivalent to our previous implementation; it's just more efficient in PyTorch, and that's why I chose this implementation instead. In addition to that, I'm being careful with how I name my variables: for example, c_attn here is named the same as in the Hugging Face code, so our keys should exactly follow the schema of the Hugging Face Transformers code. That will make it very easy for us to port over all the weights from exactly these naming conventions, because all of our variables are named the same thing. But at this point, we have finished the GPT-2 implementation. What that allows us to do is avoid using the file from Hugging Face, which is fairly long, about 2,000 lines of code. Instead we have less than 100 lines of code, and this is the complete GPT-2 implementation. So at this stage we should just be able to take over all the weights, set them, and then do generation. Let's see what that looks like. Okay, so here I've also changed the GPT config so that the numbers, the hyperparameters, agree with the GPT-2 124M model. The maximum sequence length, which I call block size here, is 1,024. The number of tokens is 50,257, which, if you watched my tokenizer video, you know is 50,000 BPE merges, plus 256 byte tokens, the leaves of the BPE tree, and one special end-of-text token that delimits different documents and can start generation as well. There are 12 layers, there are 12 heads in the attention, and the dimension of the transformer is 768. So here's how we can now load the parameters from Hugging Face into our code and initialize the GPT class with those parameters. Let me just copy-paste a bunch of code here. I'm not going to go through this code too slowly, because honestly, it's not that interesting or exciting. We're just loading the weights.
So it's kind of dry. But as I mentioned, there are four models in this mini series of GPT-2. This is some of the Jupyter code that we had here on the right. I'm just porting it over. These are the hyperparameters of the GPT-2 models. We're creating the config object and creating our own model. the GPT-2 models. We're creating the config object and creating our own model. And then what's happening here is we're creating the state dict, both for our model and for the Hugging Face model. And then what we're doing here is we're going over the Hugging Face model keys and we're copying over those tensors. And in the process, we are kind of ignoring a few of the buffers. They're not parameters, they're buffers. So for example, attention.bias, that's just used for the auto-aggressive mask. And so we are ignoring some of those masks and that's it. And then one additional kind of annoyance is that this comes from the TensorFlow repo, and I'm not sure how, this is a little bit annoying, but some of the weights are transposed from what PyTorch would want, and so manually I hard-coded the weights that should be transposed, and then we transpose them if that is so, and then we return this model. So the from pretrained is a constructor or a class method in Python that returns the GPT object if we just give it the model type, which in our case is GPT-2, the smallest model that we're interested in. So this is the code, and this is how you would use it. And we can pop open the terminal here in VS Code, and we can Python train GPT-2.py. And fingers crossed. python train gpt2.py and fingers crossed okay so we didn't crash and so we can load the weights and the biases and everything else into our nn module but now let's also get additional confidence that this is working and let's try to actually generate from this model okay now before we can actually generate from this model we have to be able to forward it we didn't actually write that code yet. So here's the forward function. So the input to the forward is going to be our indices, our token indices. And they are always of shape B by T. And so we have batch dimension of B and then we have the time dimension of up to T. And the T can't be more than the block size. The block size is the maximum sequence length. So B by T indices arranged as sort of like a two-dimensional layout. And remember that basically every single row of this is of size up to block size. And this is T tokens that are in a sequence, and then we have B independent sequences stacked up in a batch so that this is efficient. Now here we are forwarding the position embeddings and the token embeddings, and this code should be very recognizable from the previous lecture. So we basically use a range which is kind of like a version of range but for PyTorch and we're iterating from zero to t and creating this positions sort of indices and then we are making sure that they're on the same device as idx because we're not going to be training on only cpu that's going to be too inefficient we want to be training on gpu and that's going to come in in a bit then we have the position embeddings and the token embeddings and the addition operation of those two. Now notice that the position embeddings are going to be identical for every single row of input. And so there's broadcasting hidden inside this plus where we have to create an additional dimension here. And then these two add up because the same position embeddings apply at every single row of our examples stacked up in a batch. 
Then we forward the transformer blocks and finally the last layer norm and the lmhead. So what comes out after forward is the logits and if the input was b by t indices then at every single b by t we will calculate the logits for what token comes next in the sequence. So what is the token b, t plus 1, the one on the right of this token. And vocab size here is the number of possible tokens and so therefore this is the tensor that we're going to obtain. And these logits are just a softmax away from becoming probabilities. So this is the forward pass of the network, and now we can get logits, and so we're going to be able to generate from the model imminently. Okay, so now we're going to try to set up the identical thing on the left here that matches Hugging Face on the right. So here we've sampled from the pipeline, and we sampled five times up to 30 tokens with the prefix of hello on the language model, and these are the completions that we achieved. So we're going to try to replicate that on the left here. So number of turn sequences is 5, max length is 30. So the first thing we do, of course, is we initialize our model. Then we put it into evaluation mode. Now, this is a good practice to put the model into eval when you're not going to be training it. You're just going to be using it. And I don't actually know if this is doing anything right now for the following reason. Our model up above here contains no modules or layers that actually have a different behavior at training or evaluation time. So for example, dropout, batch norm, and a bunch of other layers have this kind of behavior. But all of these layers that we've used here should be identical in both training and evaluation time. So potentially model.eval does nothing but then I'm not actually sure if this is the case and maybe PyTorch internals do some clever things depending on the evaluation mode inside here. The next thing we're doing here is we are moving the entire model to CUDA. So we're moving this all of the tensors to GPU. So I'm SSH'd here to a cloud box and I have a bunch of GPUs on this box and here I'm moving the entire model and all of its members and all of its tensors and everything like that. Everything gets shipped off to basically a whole separate computer that is sitting on the GPU and the GPU is connected to the CPU and they can communicate but it's basically a whole separate computer with its own computer architecture and it's really well catered to parallel processing tasks like those of running neural networks. So I'm doing this so that the model lives on the GPU, a whole separate computer, and it's just going to make our code a lot more efficient because all of this stuff runs a lot more efficiently on the GPUs so that's the model itself. Now the next thing we want to do is we want to start with this as the prefix when we do the generation so let's actually create those prefix tokens so here's the code that I've written we're going to import the tech token library from OpenAI So here's the code that I've written. We're going to import the Tiktokin library from OpenAI, and we're going to get the GPT-2 encoding. So that's the tokenizer for GPT-2. And then we're going to encode this string and get a list of integers, which are the tokens. 
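A sketch of building that generation prefix: tokenize the prompt with tiktoken, then replicate it into five rows so we sample five sequences in parallel.

```python
# Build the prefix tokens and replicate them into 5 rows.
import tiktoken
import torch

enc = tiktoken.get_encoding("gpt2")
tokens = enc.encode("Hello, I'm a language model,")    # 8 tokens
tokens = torch.tensor(tokens, dtype=torch.long)        # shape (8,)
x = tokens.unsqueeze(0).repeat(5, 1)                   # shape (5, 8): 5 rows of the same prefix
x = x.to("cuda")                                       # keep the input on the same device as the model
```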
Now, these integers here should actually be fairly straightforward because we can just copy paste this string and we can sort of inspect what it is in T tokenizer so just pasting that in these are the tokens that are going to come out so this list of integers is what we expect tokens to become and as you recall if you saw my video of course all the tokens they're just little string chunks right so these are this is the chunkation of this string into gpt-2 tokens. So once we have those tokens, it's a list of integers, we can create a torch tensor out of it. In this case, it's eight tokens. And then we're going to replicate these eight tokens for five times to get five rows of eight tokens. And that is our initial input X, as I call it here. And it lives on the GPU as well. So x now is this idx that we can pin to forward to get our logits so that we know what comes as the sixth token uh sorry as the ninth token in every one of these five rows. And we are now ready to generate. So let me paste in one more code block here. So what's happening here in this code block is we have this x, which is of size b by t, so batch by time. And we're going to be, in every iteration of this loop, we're going to be adding a column of new indices into each one of these rows. And so these are the new indices, and we're appending them to the sequence as we're sampling. So with each loop iteration, we get one more column into x. And all of the operations happen in the context manager of torch.nograd. This is just telling PyTorch that we're not going to be calling.backward on any of this. So it doesn't have to cache all the intermediate tensors. It's not going to have to prepare in any way for a potential backward later. And this saves a lot of space and also possibly some time. So we get our logits. We get the logits at only the last location. We throw away all the other logits. We don't need them. We only care about the last column's logits. So this is being wasteful, but this is just kind of like an inefficient implementation of sampling. So it's correct, but inefficient. So we get the last column of logits, pass it through softmax to get our probabilities. Then here I'm doing top case sampling of 50, and I'm doing that because this is the HuggingFace default. So just looking at the HuggingFace docs here of a pipeline, there's a bunch of quarks that go into HuggingFace. And I mean, it's kind of a lot, honestly, but I guess the important one that I noticed is that they're using topk by default, which is 50. And what that does is that, so that's being used here as well. And what that does is basically we want to take our probabilities and we only want to keep the top 50 probabilities. And anything that is lower than the 50th probability, we just clamp to zero and renormalize. And so that way we are never sampling very rare tokens. The tokens we're going to be sampling are always in the top 50 of most likely tokens. And this helps keep the model kind of on track, and it doesn't blabber on, and it doesn't get lost, and it doesn't go off the rails as easily. And it kind of like sticks in the vicinity of likely tokens a lot better. So this is the way to do it in PyTorch, and you can step through it if you like. I don't think it's super insightful, so I'll speed through it. 
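A sketch of that sampling loop, assuming the model and the prefix x from the earlier snippets and that model(x) returns logits of shape (B, T, vocab_size): keep only the last position's logits, softmax, restrict to the top 50 tokens (the Hugging Face pipeline default), sample, and append.

```python
# Top-k (k=50) sampling loop.
import tiktoken
import torch
import torch.nn.functional as F

max_length = 30
torch.manual_seed(42)
with torch.no_grad():                                       # no backward pass, so nothing needs caching
    while x.size(1) < max_length:
        logits = model(x)                                   # (B, T, vocab_size)
        logits = logits[:, -1, :]                           # only the last position matters
        probs = F.softmax(logits, dim=-1)
        topk_probs, topk_indices = torch.topk(probs, 50, dim=-1)
        ix = torch.multinomial(topk_probs, num_samples=1)   # sample within the top 50
        xcol = torch.gather(topk_indices, -1, ix)           # map back to vocabulary ids
        x = torch.cat((x, xcol), dim=1)

enc = tiktoken.get_encoding("gpt2")
for row in x:
    print(">", enc.decode(row.tolist()))
```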
But roughly speaking, we get this new column of tokens, we append it onto x, and the columns of x grow until this while loop condition gets tripped. Then finally we have an entire x of size 5 by 30 in this example, and we can just print all those individual rows. So I'm getting all the rows, getting all the tokens that were sampled, and using the decode function from the tiktoken tokenizer to get back the string, which we can print. And so, new terminal, and let me run python train_gpt2.py. Okay, so these are the generations that we're getting. "Hello, I'm a language model, not a program." New line, new line, etc. "Hello, I'm a language model, and one of the main things that bothers me when they create languages is how easy it becomes to create something that..." I mean, it will just blabber on, right, in all these cases. Now, one thing you will notice is that these generations are not the generations of Hugging Face here, and I can't find the discrepancy, to be honest. I didn't fully go through all these options, but probably there's something else hiding in addition to the top-p, so I'm not able to match it up. But just for correctness, down here below in the Jupyter notebook, using the Hugging Face model, I replicated the code, and if I run that, then I am getting the same results. So basically the model internals are not wrong; it's just that I'm not 100% sure what the pipeline does in Hugging Face, and that's why we're not able to match them up. Otherwise the code is correct, and we've loaded all the tensors correctly, so we're initializing the model correctly and everything here works. Long story short, we've ported all the weights and initialized the GPT-2 (this is the exact OpenAI GPT-2), and it can generate sequences, and they look sensible. Now, here of course we're initializing with the GPT-2 model weights, but we want to initialize from scratch, from random numbers, and actually train a model that will give us sequences as good as, or better than, these ones in quality. And so that's what we turn to next. It turns out that using a random model is actually fairly straightforward, because PyTorch already initializes our model randomly by default. When we create the GPT model in the constructor, all of these layers and modules have random initializers that are there by default. So when these linear layers get created and so on, there are default constructors, for example using the Xavier initialization that we saw in the past, to construct the weights of these layers. And so creating a random model instead of a GPT-2 model is actually fairly straightforward: we would just come here, and instead we would create model equals GPT with the default GPTConfig, and the default config uses the 124M parameters. So this is the random model initialization, and we can run it, and we should be able to get results. Now, the results here of course are total garbage, and that's because this is a random model, so we're just getting random token string pieces chunked up totally at random. So that's what we have right now. Now, one more thing I wanted to point out, by the way, is that in case you do not have CUDA available, because you don't have a GPU, you can still follow along with what we're doing here to some extent.
And probably not to the very end, because by the end we're going to be using multiple GPUs and actually doing a serious training run. But for now, you can follow along decently, okay? So one thing that I like to do in PyTorch is auto-detect the device that is available to you. In particular, you could do that like this. Here we are trying to detect the device to run on that has the highest compute capability, you can think about it that way. By default we start with CPU, which of course is available everywhere, because every single computer will have a CPU. But then we can try to detect: do you have a GPU? If so, use CUDA. And then if you don't have CUDA, do you at least have MPS? MPS is the backend for Apple Silicon. So if you have a MacBook that is fairly new, you probably have Apple Silicon on the inside, and that has a GPU that is actually fairly capable, depending on which MacBook you have. And so you can use MPS, which will potentially be faster than CPU. And then we can print the device. Now, once we have the device, we can actually use it in place of CUDA; we just swap it in. And notice that here, when we call the model on x, if this x is on CPU instead of GPU, it will still work fine, because here in the forward, which is where PyTorch will come, when we create pos we are careful to use the device of idx to create this tensor as well. So there won't be any mismatch where one tensor is on CPU and one is on GPU and you can't combine those; we are carefully initializing on the correct device as indicated by the input to this model. So this will auto-detect the device. For me, this will of course be GPU, so: using device cuda. But you can also run with another device, as I mentioned, and it's not going to be too much slower. So if I override device equals cpu here, then we'll still print cuda of course, but now we're actually using the CPU. One, two, three, four, five, six... okay, about six seconds. And we're not even using torch.compile and things like that, which will speed everything up a lot as well. But you can follow along, even on the CPU, I think, to a decent extent. So that's a note on that. Okay, so I do want to loop around eventually to what it means to have different devices in PyTorch, and what exactly PyTorch does in the background for you when you do something like module.to(device), or where you take a torch tensor and do a .to(device), and what exactly happens and how that works. But for now, I'd like to get to training; I'd like to start training the model. For now let's just say the device makes the code go fast, and let's go into how we can actually train the model. So to train the model we're going to need some dataset, and for me the best, simplest dataset for debugging is the tiny Shakespeare dataset. It's available at this URL, so you can wget it, or you can just search for the tiny Shakespeare dataset. What I have in my file system is just: ls, input.txt, so I already downloaded it, and here I'm reading the dataset, getting the first 1,000 characters and printing the first 100. Now remember that the GPT-2 tokenizer has a compression ratio of roughly 3 to 1, so 1,000 characters is roughly 300 tokens that will come out of the slice that we're currently getting.
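Backing up a step, here is a sketch of the device auto-detection described a moment ago, before we move on to the data: prefer CUDA, then the Apple-silicon MPS backend, and fall back to CPU.

```python
# Auto-detect the best available device.
import torch

device = "cpu"
if torch.cuda.is_available():
    device = "cuda"
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = "mps"   # Apple Silicon backend
print(f"using device: {device}")

# model.to(device)   # move the model's parameters and buffers there
# x = x.to(device)   # inputs must live on the same device as the model
```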
So this is the first few characters, and if you want to get a few more statistics on this, we can do word count on input.txt, so we can see that this is 40,000 lines, about 200,000 words in this dataset, and about 1 million bytes in this file. And knowing that this file is only ASCII characters, there's no crazy Unicode here as far as I know, and so every ASCII character is encoded with one byte, and so this is the same number, roughly a million characters, inside this dataset. So that's the dataset size. By default, very small and minimal dataset for debugging, to get us off the ground. In order to tokenize this dataset, we're going to get the tiktoken encoding for GPT-2, encode the data, the first 1000 characters, and then I'm only going to print the first 24 tokens. So these are the tokens as a list of integers, and if you can read GPT-2 tokens, you will see that 198 here, you'll recognize that as the slash n character, so that is a new line. And then here, for example, we have two new lines, so that's 198 twice here. So this is just a tokenization of the first 24 tokens. So what we want to do now is we want to actually process these token sequences and feed them into a transformer, and in particular we want to rearrange these tokens into this idx variable that we're going to be feeding into the transformer. So we don't want a single very long one-dimensional sequence. We want an entire batch, where each sequence is basically up to T tokens, and T cannot be larger than the maximum sequence length, and then we have these T-long sequences of tokens, and we have B independent examples of sequences. So how can we create a B by T tensor that we can feed into the forward out of these one-dimensional sequences? So here's my favorite way to achieve this. So if we take torch, and then we create a tensor object out of this list of integers, and just the first 24 tokens, my favorite way to do this is basically you do a dot view of, for example, 4 by 6, which multiply to 24. And so it's just a two-dimensional rearrangement of these tokens. And you'll notice that when you view this one-dimensional sequence as two-dimensional 4 by 6 here, the first six tokens up to here end up being the first row, the next six tokens here end up being the second row, and so on. And so basically it's just going to stack up every six tokens, in this case, as independent rows, and it creates a batch of tokens in this case. And so for example, if we are token 25 in the transformer, when we feed this in and this becomes the idx, this token is going to see these three tokens and is going to try to predict that 198 comes next. So in this way we are able to create this two-dimensional batch, and that's quite nice. Now in terms of the label that we're going to need for the target to calculate the loss function, how do we get that? Well, we could write some code inside the forward pass, because we know that the next token in a sequence, which is the label, is just to the right of us. But you'll notice that actually, for this token at the very end, 13, we don't actually have the next correct token, because we didn't load it. So we actually didn't get enough information here. So I'll show you my favorite way of basically getting these batches, and I like to personally have not just the input to the transformer, which I like to call x, but I also like to create the labels tensor, which is of the exact same size as x, but contains the targets at every single position. And so here's the way that I like to do that.
I like to make sure that I fetch plus one token, because we need the ground truth for the very last token, for 13. And then when we're creating the input, we take everything up to the last token, not including, and view it as 4 by 6. And when we're creating targets, we do the buffer, but starting at index 1, not index 0. So we're skipping the first element, and we view it in the exact same size. And then when I print this, here's what happens, where we see that basically, as an example, for this token 25, its target was 198, and that's now just stored at the exact same position in the target tensor, which is 198. And also this last token, 13, now has its label, which is 198, and that's just because we loaded this plus one here. So basically this is the way I like to do it. You take long sequences, you view them in two-dimensional terms so that you get batches of time, and then we make sure to load one additional token. So we basically load a buffer of tokens of B times T plus one, and then we sort of offset things and view them, and then we have two tensors. One of them is the input to the transformer, and the other, exactly, is the labels. And so let's now reorganize this code and create a very simple data loader object that tries to basically load these tokens and feed them to the transformer and calculate the loss. Okay, so I've reshuffled the code here accordingly. So as you can see here, I'm temporarily overriding to run on CPU, and importing tiktoken, and all of this should look familiar. We're loading a thousand characters. I'm setting B and T to just be 4 and 32 right now, just because we're debugging. We just want to have a single batch that's very small. And all of this should now look familiar and follows what we did on the right. And then here we create the model and get the logits. And so here, as you see, I already ran this, it only runs in a few seconds, but because we have a batch of 4 by 32, our logits are now of size 4 by 32 by 50,257. So those are the logits for what comes next at every position. And now we have the labels, which are stored in y. So now is the time to calculate the loss, and then do the backward pass, and then the optimization. So let's first calculate the loss. Okay, so to calculate the loss we're going to adjust the forward function of this nn.Module in the model, and in particular we're not just going to be returning logits, but also we're going to return the loss. And we're going to not just pass in the input indices, but also the targets in y. And now we will print not logits.shape anymore, we're actually going to print the loss function, and then sys.exit of zero, so that we skip some of the sampling logic. So now let's swing up to the forward function, which gets called there, because now we also have these optional targets. And when we get the targets, we can also calculate the loss. And remember that we want to basically return logits and loss, and loss by default is None. But let's put this here: if targets is not None, then we want to calculate the loss, and Copilot is already getting excited here and calculating what looks to be the correct loss. It is using the cross entropy loss, as is documented here. So this is a function in PyTorch under the functional module. Now what is actually happening here, because it looks a little bit scary: basically F.cross_entropy does not like multidimensional inputs, it can't take a B by T by vocab size.
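As a small sketch of the batching recipe described above, before we get into the loss (the values of B and T here are just illustrative):

```python
import tiktoken
import torch

enc = tiktoken.get_encoding("gpt2")
with open("input.txt", "r") as f:
    text = f.read()
tokens = enc.encode(text[:1000])

B, T = 4, 6
buf = torch.tensor(tokens[: B * T + 1])  # fetch one extra token for the very last label
x = buf[:-1].view(B, T)                  # inputs: every T tokens become one row
y = buf[1:].view(B, T)                   # targets: same shape, shifted right by one position
```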
So what's happening here is that we are flattening out this three-dimensional tensor into just two dimensions. The first dimension is going to be calculated automatically, and it's going to be B times T, and then the last dimension is vocab size. So basically this is flattening out this three-dimensional tensor of logits to just be two-dimensional, B times T, all individual examples, and vocab size in terms of the length of each row. And then it's also flattening out the targets, which are also two-dimensional at this stage, but we're going to just flatten them out so they're just a single tensor of B times T, and this can then pass into cross entropy to calculate a loss, which we return. So this should basically at this point run, because it's not too complicated. So let's run it, and let's see if we should be printing the loss. Here we see that we printed 11, roughly. Notice that this is a tensor of a single element, which is this number 11. Now, we also want to be able to calculate a reasonable starting point for a randomly initialized network. So we covered this in previous videos, but our vocabulary size is 50,257. At initialization of the network, you would hope that every vocab element is getting roughly a uniform probability, so that we're not favoring any token way too much at initialization, we're not confidently wrong at initialization. So what we're hoping is that the probability of any arbitrary token is roughly 1 over 50,257, and now we can sanity check the loss, because remember that the cross entropy loss is just basically the negative log likelihood. So if we now take this probability and we take it through the natural logarithm and then we do the negative, that is the loss we expect at initialization, and we covered this in previous videos. So I would expect something around 10.82, and we're seeing something around 11, so it's not way off. This is roughly the probability I expect at initialization. So that tells me that at initialization our probability distribution is roughly diffuse, it's a good starting point, and we can now perform the optimization and tell the network which elements should follow correctly in what order. So at this point we can do a loss.backward, calculate the gradients, and do an optimization. So let's get to that. Okay, so let's do the optimization now. So here we have the loss, this is how we get the loss, but now basically we want a little for loop here. So for i in range, let's do 50 steps or something like that. Let's create an optimizer object in PyTorch. And so here we are using the Adam optimizer, which is an alternative to the stochastic gradient descent optimizer, SGD, that we were using. So SGD is a lot simpler, Adam is a bit more involved. And I actually specifically like the AdamW variation, because in my opinion it kind of just like fixes a bug. So AdamW is a bug fix of Adam, is what I would say. When we go to the documentation for AdamW, oh my gosh, we see that it takes a bunch of hyperparameters, and it's a little bit more complicated than the SGD we were looking at before, because in addition to basically updating the parameters with the gradient scaled by the learning rate, it keeps these buffers around, and it keeps two buffers, the m and the v, which it calls the first and the second moment.
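A self-contained sketch of the flattening that F.cross_entropy needs, plus the sanity check for the expected loss at initialization (random logits stand in for the model output here):

```python
import math
import torch
import torch.nn.functional as F

B, T, V = 4, 32, 50257
logits = torch.randn(B, T, V)            # stand-in for the model output
targets = torch.randint(0, V, (B, T))

# Flatten (B, T, V) -> (B*T, V) and (B, T) -> (B*T,) so cross_entropy accepts them.
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
print(loss.item())                       # a bit above 11 for these random logits
print(-math.log(1.0 / V))                # ~10.82, the loss for a perfectly uniform distribution
```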
So something that looks a bit like momentum, and something that looks a bit like RMSProp, if you're familiar with it. But you don't have to be, it's just kind of like a normalization that happens on each gradient element individually, and it speeds up the optimization, especially for language models. But I'm not going to go into the detail right here. We're going to treat this as a bit of a black box, and it just optimizes the objective faster than SGD, which is what we've seen in the previous lectures. So let's use it as a black box in our case, create the optimizer object, and then go through the optimization. The first thing to always make sure: the Copilot did not forget to zero the gradients. So always remember that you have to start with a zero gradient. Then when you get your loss and you do a .backward, .backward adds to gradients. So it deposits gradients, it always does a plus equals on whatever the gradients are, which is why you must set them to zero. So this accumulates the gradient from this loss, and then we call the step function on the optimizer to update the parameters and to decrease the loss. And then we print the step, and loss.item is used here because loss is a tensor with a single element; .item will actually convert that to a single float, and this float will live on the CPU. So this gets to some of the internals again of the devices, but loss is a tensor with a single element, and it lives on GPU for me, because I'm using GPUs. When you call .item, PyTorch behind the scenes will take that one-dimensional tensor, ship it back to the CPU memory, and convert it into a float that we can just print. So this is the optimization, and this should probably just work. Let's see. Actually, sorry, instead of using the CPU override, let me delete that, so this is a bit faster for me, and it runs on CUDA. Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu. So cuda:0 is the zeroth GPU, because I actually have eight GPUs on this box, so the zeroth GPU on my box, and CPU. And the model we have moved to device, but when I was writing this code I actually introduced a bug, because buf we never moved to device. And you have to be careful, because you can't just do buf.to(device); it's not stateful, it doesn't convert it in place, it instead returns a pointer to new memory which is on the device. So you see how we can just do model.to(device), but that does not apply to tensors, you have to do buf = buf.to(device), and then this should work. Okay, so what do we expect to see? We expect to see a reasonable loss in the beginning, and then we continue to optimize just a single batch. And so we want to see that we can overfit this single batch, we can crush this little batch and we can perfectly predict the indices on just this little batch. And indeed, that is roughly what we're seeing here. So we started off at roughly 10.82, 11 in this case, and then as we continue optimizing on this single batch, without loading new examples, we are making sure that we can overfit a single batch, and we are getting to very, very low loss. So the transformer is memorizing this single individual batch. And one more thing I didn't mention is the learning rate here is 3e-4, which is a pretty good default for most optimizations that you want to run at a very early debugging stage. So this is our simple inner loop, and we are overfitting a single batch, and this looks good.
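A sketch of the single-batch overfitting loop described above (model, x, y, and device are assumed to already exist, as in the surrounding code):

```python
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
for i in range(50):
    optimizer.zero_grad()                    # gradients accumulate, so reset them first
    logits, loss = model(x, y)               # forward pass returns logits and loss
    loss.backward()                          # deposits (+=) gradients
    optimizer.step()                         # update the parameters
    print(f"step {i}, loss: {loss.item()}")  # .item() ships the scalar back to the CPU as a float
```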
So now what comes next is we don't just want to overfit a single batch. We actually want to do an optimization. So we actually need to iterate these x, y batches and create a little data loader that makes sure that we're always getting a fresh batch and that we're actually optimizing a reasonable objective. So let's do that next. Okay, so this is what I came up with, and I wrote a little DataLoaderLite. So what this data loader does is we're importing tiktoken up here, reading the entire text file from this single input.txt, tokenizing it, and then we're just printing the number of tokens in total and the number of batches in a single epoch of iterating over this dataset. So how many unique batches do we output before we loop back around to the beginning of the document and start reading it again? So we start off at position zero, and then we simply walk the document in batches of B times T. So we take chunks of B times T, and then always advance by B times T. And it's important to note that we're always advancing our position by exactly B times T, but when we're fetching the tokens, we're actually fetching from the current position to B times T plus one, and we need that plus one because, remember, we need the target token for the last token in the current batch, and so that way we can do the x, y exactly as we did it before. And if we are to run out of data, we'll just loop back around to zero. So this is one way to write a very, very simple data loader that simply just goes through the file in chunks, and it's good enough for us for current purposes, and we're going to complexify it later. And now we'd like to come back around here, and we'd like to actually use our data loader. So the import tiktoken has moved up, and actually all of this is now useless. So instead we just want a train loader for the training data, and we want to use the same hyperparameters, so batch size was 4 and T was 32. And then here we need to get the x, y for the current batch. So let's see if Copilot gets it, because this is simple enough. So we call the next batch, and then we make sure that we have to move our tensors from CPU to the device. So here, when I converted the tokens, notice that I didn't actually move these tokens to the GPU, I left them on the CPU, which is the default, and that's just because I'm trying not to waste too much memory on the GPU. In this case this is a tiny dataset and it would fit, but it's fine to do it this way for our purposes right now. So we get the next batch, we keep the data loader as a simple CPU class, and then here we actually ship it to the GPU and do all the computation. And let's see if this runs. So python train_gpt2.py. And what do we expect to see before this actually happens? What we expect to see is now we're actually getting the next batch, so we expect to not overfit a single batch. And so I expect our loss to come down, but not too much. And that's because I still expect it to come down, because in the 50,257 tokens, many of those tokens never occur in our dataset. So there are some very easy gains to be made here in the optimization by, for example, taking the biases of all the logits that never occur and driving them to negative infinity. And that would basically just... it's just that all of these crazy Unicode tokens or different languages, those tokens never occur.
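A minimal sketch along the lines of the data loader described above:

```python
import tiktoken
import torch

class DataLoaderLite:
    def __init__(self, B, T):
        self.B, self.T = B, T
        with open("input.txt", "r") as f:
            text = f.read()
        enc = tiktoken.get_encoding("gpt2")
        self.tokens = torch.tensor(enc.encode(text))
        print(f"loaded {len(self.tokens)} tokens")
        print(f"1 epoch = {len(self.tokens) // (B * T)} batches")
        self.current_position = 0

    def next_batch(self):
        B, T = self.B, self.T
        buf = self.tokens[self.current_position : self.current_position + B * T + 1]
        x = buf[:-1].view(B, T)   # inputs
        y = buf[1:].view(B, T)    # targets
        self.current_position += B * T
        # if the next batch would run past the end of the data, loop back around to zero
        if self.current_position + B * T + 1 > len(self.tokens):
            self.current_position = 0
        return x, y
```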
So their probability should be very low, and so the gains that we should be seeing are along the lines of basically deleting the usage of tokens that never occur. That's probably most of the loss gain that we're going to see at this scale right now. But we shouldn't come to zero, because we are only doing 50 iterations, and I don't think that's enough to do an epoch right now. So let's see what we got. We have 338,000 tokens, which makes sense with our 3 to 1 compression ratio, because there are 1 million characters. So one epoch with the current setting of B and T will take 2,600 batches, and we're only doing 50 batches of optimization in here. So we start off in a familiar territory, as expected, and then we seem to come down to about 6.6. So basically things seem to be working okay right now with respect to our expectations. So that's good. Okay, next I want to actually fix a bug that we have in our code. It's not a major bug, but it is a bug with respect to how GPT-2 training should happen. So the bug is the following. We were not being careful enough when we were loading the weights from Hugging Face, and we actually missed a little detail. So if we come here, notice that the shape of these two tensors is the same. So this one here is the token embedding at the bottom of the transformer, and this one here is the language modeling head at the top of the transformer, and both of these are basically two-dimensional tensors, and their shape is identical. So here, the first one is the token embedding, and the second one is this linear layer at the very top, the classifier layer. Both of them are of shape 50,257 by 768. This one here is giving us our token embeddings at the bottom, and this one here is taking the 768 channels of the transformer and trying to upscale that to 50,257 to get the logits for the next token. So they're both the same shape, but more than that, actually, if you look at comparing their elements, in PyTorch this is an element-wise equality, so then we use .all(), and we see that every single element is identical. And more than that, we see that if we actually look at the data pointer, this is a way in PyTorch to get the actual pointer to the data and the storage, we see that actually the pointer is identical. So not only are these two separate tensors that happen to have the same shape and elements, they're actually pointing to the identical tensor. So what's happening here is that this is a common weight tying scheme that actually comes from the original Attention is All You Need paper, and actually even the reference before it. So if we come here, Embeddings and Softmax in the Attention is All You Need paper, they mention that in our model we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. So this is an awkward way to phrase that these two are shared and they're tied and they're the same matrix. And the [30] reference is this paper. So this came out in 2017, and you can read the full paper, but basically it argues for this weight tying scheme. And I think intuitively the idea for why you might want to do this comes from this paragraph here. And basically you can observe that you actually want these two matrices to behave similarly, in the following sense.
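A sketch of the check that reveals the tied weights (model_hf here is assumed to be the Hugging Face GPT-2 loaded earlier):

```python
sd_hf = model_hf.state_dict()
wte = sd_hf["transformer.wte.weight"]        # token embedding at the bottom
lm_head = sd_hf["lm_head.weight"]            # classifier layer at the top
print(wte.shape, lm_head.shape)              # both torch.Size([50257, 768])
print((wte == lm_head).all())                # element-wise identical
print(wte.data_ptr() == lm_head.data_ptr())  # same storage: they are literally one tensor
```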
If two tokens are very similar semantically, like maybe one of them is all lowercase and the other one is all uppercase, or it's the same token in a different language or something like that, if you have similarity between two tokens, presumably you would expect that they are nearby in the token embedding space. But in the exact same way, you'd expect that if you have two tokens that are similar semantically, you'd expect them to get the same probabilities at the output of a transformer, because they are semantically similar. And so both positions in the transformer, at the very bottom and at the top, have this property that similar tokens should have similar embeddings or similar weights. And so this is what motivates their exploration here. And they kind of, you know, I don't want to go through the entire paper and you can go through it, but this is what they observe. They also observe that if you look at the output embeddings, they also behave like word embeddings, if you just kind of try to use those weights as word embeddings. So they kind of observe this similarity, they try to tie them, and they observe that they can get much better performance in that way. And so this was adopted in the Attention is All You Need paper, and then it was used again in GPT-2 as well. So I couldn't find it in the Transformers implementation, I'm not sure where they tie those embeddings, but I can find it in the original GPT-2 code introduced by OpenAI. So this is the OpenAI GPT-2 source, model.py. And here, where they are forwarding this model, and this is in TensorFlow, but that's okay, we see that they get the wte token embeddings, and then here is the encoding of the token embeddings and the position, and then here at the bottom they use the wte again to do the logits. So when they get to the logits, it's a matmul of this output from the transformer and the wte tensor, which is reused. And so the wte tensor basically is used twice, on the bottom of the transformer and on the top of the transformer, and in the backward pass we'll get gradient contributions from both branches, right, and these gradients will add up on the wte tensor. So we'll get a contribution from the classifier layer, and then at the very end of the transformer we'll get a contribution at the bottom of it, flowing again into the wte tensor. So we are currently not sharing wte in our code, but we want to do that. So, the weight sharing scheme: one way to do this, let's see if Copilot gets it... oh, it does, okay. So this is one way to do it, basically relatively straightforward. What we're doing here is we're taking the wte.weight and we're simply redirecting it to point to the lm_head. So this basically copies the data pointer, right, it copies the reference. And now the old value of wte.weight becomes orphaned, and PyTorch will clean it up, Python will clean it up. And so we are only left with a single tensor, and it's going to be used twice in the forward pass. And this is, to my knowledge, all that's required. So we should be able to use this, and this should probably train. We're just going to basically be using this exact same tensor twice.
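The weight sharing itself comes down to roughly one line inside the GPT constructor (shown here as a fragment):

```python
# weight tying: the token embedding and the final classifier share one tensor
self.transformer.wte.weight = self.lm_head.weight
```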
And we weren't being careful with tracking the likelihoods, but according to the paper and according to the results, you'd actually expect slightly better results doing this. And in addition to that, one other reason that this is very, very nice for us is that this is a ton of parameters, right? What is the size of it here? It's 768 times 50,257, so this is 40 million parameters, and this is a 124 million parameter model, so 40 divided by 124... so this is like 30 percent of the parameters being saved using this weight tying scheme. And so this might be one of the reasons that this works slightly better if you're not training the model long enough, because of the weight tying: you don't have to train as many parameters, and so you become more efficient in terms of the training process, because you have fewer parameters, and you're putting in this inductive bias that these two embeddings should share similarities between tokens. So this is the weight tying scheme, and we've saved a ton of parameters, and we expect our model to work slightly better because of this scheme. Okay, next I would like us to be a bit more careful with the initialization, and to try to follow the way GPT-2 initialized their model. Now unfortunately the GPT-2 paper and the GPT-3 paper are not very explicit about initialization, so we kind of have to read between the lines. And instead of going to the paper, which is quite vague, there's a bit of information in the code that OpenAI released. So when we go to the model.py, we see that when they initialize their weights, they are using a standard deviation of 0.02, so this is a normal distribution for the weights, and the standard deviation is 0.02. For the bias, they initialize that with 0. And then when we scroll down here, why is this not scrolling? The token embeddings are initialized at 0.02, and position embeddings at 0.01 for some reason. So those are the initializations, and we'd like to mirror that of GPT-2 in our module here. So here's a snippet of code that I sort of came up with very quickly. So what's happening here is, at the end of our initializer for the GPT module, we're calling the apply function of nn.Module, and that iterates all the sub-modules of this module and applies the _init_weights function on them. And so what's happening here is that we're iterating all the modules here, and if they are an nn.Linear module, then we're going to make sure to initialize the weight using a normal with a standard deviation of 0.02, and if there's a bias in this layer, we will make sure to initialize that to zero. Note that zero initialization for the bias is not actually the PyTorch default; by default the bias here is initialized with a uniform, so that's interesting. So we make sure to use zero, and for the embedding we're just going to use 0.02 and keep it the same. So we're not going to change it to 0.01 for positional, because it's about the same. And then if you look through our model, the only other layer that requires initialization and that has parameters is the layer norm, and the PyTorch default initialization sets the scale in the layer norm to be one and the offset in the layer norm to be zero, so that's exactly what we want, and so we're just going to keep it that way. And so this is the default initialization if we are following the GPT-2 source code that they released.
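A sketch of the GPT-2-style initialization described above, applied via self.apply(self._init_weights) at the end of the constructor:

```python
import torch
import torch.nn as nn

def _init_weights(self, module):
    if isinstance(module, nn.Linear):
        torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
        if module.bias is not None:
            torch.nn.init.zeros_(module.bias)   # PyTorch's default would be uniform, not zero
    elif isinstance(module, nn.Embedding):
        torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
```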
I would like to point out, by the way, that typically the standard deviation here on this initialization, if you follow the Xavier initialization, would be 1 over the square root of the number of features that are incoming into this layer. But if you'll notice, actually, 0.02 is basically consistent with that, because the d_model sizes inside these transformers for GPT-2 are roughly 768, 1,600, etc. So 1 over the square root of, for example, 768 gives us 0.03; if we plug in 1,600 we get 0.02; if we plug in three times that, 0.014, etc. So basically 0.02 is roughly in the vicinity of reasonable values for these initializations. Anyway, so it's not completely crazy to be hard-coding 0.02 here, but you'd typically like something that grows with the model size instead. But we will keep this, because that is the GPT-2 initialization per their source code. But we are not fully done yet on initialization, because there's one more caveat here. So here: a modified initialization which accounts for the accumulation on the residual path with model depth is used; we scale the weights of residual layers at initialization by a factor of 1 over square root of N, where N is the number of residual layers. So this is what the GPT-2 paper says. So we have not implemented that yet, and we can do so now. Now I'd like to actually kind of motivate a little bit what they mean here, I think. So here's roughly what they mean. If you start out with zeros in your residual stream, remember that each residual stream is of this form, where we continue adding to it: x is x plus something, some kind of contribution. So every single block of the residual network contributes some amount, and it gets added. And so what ends up happening is that the variance of the activations in the residual stream grows. So here's a small example: if we start at zero, and then 100 times we have sort of this residual stream of 768 zeros, and 100 times we add random, which is a normal distribution, zero mean, one standard deviation, if we add to it, then by the end the residual stream has grown to have a standard deviation of 10, and that's just because we're always adding these numbers. And so this scaling factor that they use here exactly compensates for that growth. So if we take n, and we basically scale down every one of these contributions into the residual stream by 1 over the square root of n (1 over the square root of n is n to the negative 0.5, right, because n to the 0.5 is the square root, and then 1 over the square root is n to the negative 0.5), if we scale it in this way, then we see that we actually get 1. So this is a way to control the growth of activations inside the residual stream in the forward pass, and so we'd like to initialize in the same way, where these weights that are at the end of each block, so this c_proj layer, the GPT-2 paper proposes to scale down those weights by 1 over the square root of the number of residual layers. So one crude way to implement this is the following. I don't know if this is PyTorch-sanctioned, but it works for me. We'll do, in the initialization, set this special NANOGPT_SCALE_INIT to one. So we're setting kind of like a flag for this module. There must be a better way in PyTorch, right? But I don't know. Okay, so we're basically attaching this flag and trying to make sure that it doesn't conflict with anything previously.
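A small demo of the residual-stream growth and the 1/sqrt(n) fix described above:

```python
import torch

x = torch.zeros(768)
n = 100  # e.g. 100 residual contributions
for _ in range(n):
    x += n**-0.5 * torch.randn(768)  # drop the n**-0.5 factor and the std grows to ~10
print(x.std())                       # with the scaling it stays around 1
```

And a rough sketch of the flag trick, which gets completed in the next paragraph (shown as fragments of the block constructors and of _init_weights):

```python
# inside the attention / MLP blocks: flag the output projections
self.c_proj.NANOGPT_SCALE_INIT = 1

# inside _init_weights: shrink the init std for flagged layers
std = 0.02
if hasattr(module, "NANOGPT_SCALE_INIT"):
    std *= (2 * self.config.n_layer) ** -0.5  # two residual additions (attn + MLP) per layer
torch.nn.init.normal_(module.weight, mean=0.0, std=std)
```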
And then when we come down, this std should be 0.02 by default, but then, if hasattr(module, 'NANOGPT_SCALE_INIT'), then std times-equals... Copilot is not guessing correctly, so we want 1 over the square root of the number of layers. So the number of residual layers here is 2 times self.config.n_layer, and then this to the negative 0.5. So we want to scale down that standard deviation, and this should be correct and implement that. I should clarify, by the way, that the two times number of layers comes from the fact that every single one of our layers in the transformer actually has two blocks that add to the residual pathway, right? We have the attention and then the MLP. So that's where the two times comes from. And the other thing to mention is that, what's slightly awkward, but we're not going to fix it, is that because we are weight sharing the wte and the lm_head, in this iteration over all of our sub-modules we're going to actually come around to that tensor twice. So we're going to first initialize it as an embedding with 0.02, and then we're going to come back around to it again in a linear and initialize it again using 0.02. And it's going to be 0.02 because the lm_head is of course not scaled, so it's not going to come here. It's just going to be basically initialized twice using the identical same initialization, but that's okay. And then, scrolling over here, I added some code here so that we have reproducibility, to set the seeds. And now we should be able to do python train_gpt2.py and let this run. And as far as I know, this is the GPT-2 initialization in the way we've implemented it right now. So this looks reasonable to me. Okay, so at this point we have the GPT-2 model, we have some confidence that it's correctly implemented, we've initialized it properly, and we have a data loader that's iterating through data batches, and we can train. So now comes the fun part. I'd like us to speed up the training by a lot, so we're getting our money's worth with respect to the hardware that we are using here. And we're going to speed up the training by quite a bit. Now you always want to start with: what hardware do you have, what does it offer, and are you fully utilizing it? So in my case, if we go to nvidia-smi, we can see that I have eight GPUs, and each one of those GPUs is an A100 SXM 80GB. So this is the GPU that I have available to me in this box. Now, when I want to spin up these kinds of boxes, by the way, my favorite place to go is Lambda Labs. They do sponsor my development and that of my projects, but this is my favorite place to go, and this is where you can spin up one of these machines and you pay per hour, and it's very, very simple. So I like to spin them up and then connect VS Code to it, and that's how I develop. Now, when we look at the A100s that are available here, the A100 80 gigabyte SXM is the GPU that I have here, and we have a bunch of numbers here for how many calculations you can expect out of this GPU. So when I come over here, and I break in right after here, so python train_gpt2... so I'm breaking in right after we calculate the logits and the loss. And the interesting thing I'd like you to note is, when I do logits.dtype, this prints a torch.float32. So by default in PyTorch, when you create tensors, and this is the case for all the activations and for the parameters of the network and so on, by default everything is in float32.
That means that every single number, activation or weight, and so on, is using a float representation that has 32 bits, and that's actually quite a bit of memory. And it turns out empirically that for deep learning as a computational workload, this is way too much, and deep learning and the training of these networks can tolerate significantly lower precisions. Not all computational workloads can tolerate small precision. So for example, if we go back to the data sheet, you'll see that actually these GPUs support up to FP64, and this is quite useful, I understand, for a lot of scientific computing applications, and there they really need this. But we don't need that much precision for deep learning training. So currently we are here, FP32, and with this code as it is right now we expect to get at most 19.5 teraflops of performance. That means we're doing 19.5 trillion floating point operations, so this is floating point multiply-add, most likely. And so these are the floating point operations. Now, notice that if we are willing to go down in precision, so TF32 is a lower precision format we're going to see in a second, you can actually get an 8x improvement here, and if you're willing to go down to float16 or bfloat16, you can actually get 16x performance, all the way to 312 teraflops. You see here that NVIDIA likes to cite numbers that have an asterisk here. This asterisk says with sparsity, but we are not going to be using sparsity in our code, and I don't know that this is very widely used in the industry right now, so most people look at this number here, without sparsity. And you'll notice that we could have got even more here, but this is int8, and int8 is used for inference, not for training, because int8 basically has uniform spacing, and we actually require a float so that we get a better match to the normal distributions that occur during the training of neural networks, where both activations and weights are distributed as a normal distribution. And so floating points are really important to match that representation. So we're not typically using int8 for training, but we are using it for inference. And if we bring down the precision, we can get a lot more teraflops out of the tensor cores available in the GPUs, and we'll talk about that in a second. But in addition to that, if all of these numbers have fewer bits of representation, it's going to be much easier to move them around, and that's where we start to get into the memory bandwidth and the memory of the model. So not only do we have a finite capacity of the number of bits that our GPU can store, but in addition to that, there's a speed with which you can access this memory, and you have a certain memory bandwidth. It's a very precious resource, and in fact many of the deep learning workloads for training are memory bound. And what that means is, actually, that the tensor cores that do all these extremely fast multiplications, most of the time they're waiting around, they're idle, because we can't feed them with data fast enough, we can't load the data fast enough from memory. Typical utilizations of your hardware: if you're getting 60 percent utilization, you're actually doing extremely well. So half of the time in a well-tuned application, your tensor cores are not doing multiplies because the data is not available. So the memory bandwidth here is extremely important as well. And if we come down in the precision for all the floats, all the numbers, weights, and activations suddenly require less memory.
So we can store more and we can access it faster, so everything speeds up, and it's amazing. And now let's reap the benefits of it, and let's first look at the TensorFloat-32 format. Okay, so first of all, what are tensor cores? Well, a tensor core is just an instruction in the A100 architecture, right? So what it does is it does basically a little 4x4 matrix multiply. So this is just matrix multiplication here of 4x4 matrices, and there are multiple configurations as to what precision any of these matrices are, in what precision the internal accumulate happens, and then what is the output precision, input precision, etc. So there's a few switches, but it's basically a 4x4 multiply. And then any time we have any operations that require matrix multiplication, they get broken up into this instruction of little 4x4 multiplies. And so everything gets broken up into this instruction, because it's the fastest way to multiply matrices, and it turns out that most of the computational work that we're doing up above, all of it really, is matrix multiplication. Most of the work computationally happens in the linear layers, linear, linear, etc. There's a few things sandwiched in between, so there's some additions in residuals, there's some GELU nonlinearities, there's some layer norms, etc. But if you just time them, you'll see that these are nothing. Like, basically the entire transformer is just a bunch of matrix multiplications, really. And especially at this small scale, the 124 million parameter model, actually the biggest matrix multiplication by far is the classifier layer at the top. That is a massive matrix multiply, going from 768 to 50,257, and that matrix multiply dominates anything else that happens in that network, roughly speaking. So it's the matrix multiplies that become a lot faster, which are hidden inside our linear layers, and they're accelerated through tensor cores. Now the best reference, I would say, for tensor cores is basically just go to the A100 architecture whitepaper, and it's pretty detailed, but I think it's relatively readable, mostly, if you half understand what's happening. So, Figure 9, TensorFloat-32. So this is the explanation basically for TF32 and what happens here, and you see that there are many configuration options here available: the input operands and what precisions they are in, and the accumulator, so basically the internal representation within the instruction when you do the accumulate of this matrix multiplication; the intermediate plus-equals of the little vector multiplies here all happens in FP32. And then this is an 8x improvement, as I mentioned, to the flops that we get. So for TF32 specifically, we're looking at this row here. And the way this works is, normally FP32 has 32 bits; TF32 is the exact same bits, we have one sign bit, we have eight exponent bits, except the mantissa bits get cropped in the float. And so basically we end up with just 19 bits instead of 32 bits, because the last 13 bits get truncated, they get dropped. And all this is internal to the instruction, so none of it is visible to anything in our PyTorch; none of our PyTorch code will change, all of the numbers will look identical.
It's just that when you call the tensor core instruction, internally in the hardware it will crop out these 13 bits, and that allows it to calculate this little matrix multiply significantly faster, 8x faster. Now of course this speedup comes at a cost, and the cost is that we are reducing the precision. Our accumulate is still in FP32, our output is FP32, our inputs are FP32, but internally things get truncated in the operands to perform the operation faster, and so our results are starting to be a bit more approximate. But empirically, when you actually train with this, you basically can't tell the difference. So the reason I like TF32 is because if you can tolerate a little bit of a precision fudge, then this is free, like none of your code sees this, it's fully internal to the operation, and the operations just go 8x faster and they're a bit more approximate. And so it's a pretty sweet spot, I would say, in optimization. And let's see what that looks like first. So I've set up our code to just time the iterations. So import time, and I changed the hyperparameters so that we have something that reflects more the kind of workload that we want to run, because we want to do a fairly large run at the end of this. So let's use batch size 16, and let's now use the actual GPT-2 maximum sequence length of 1024 tokens. So this is the configuration, and then for 50 iterations I'm just doing something very lazy here: I'm doing time.time to get the current time, and then this is the optimization loop, and now I want to time how long this takes. Now, one issue with working with GPUs is that when your CPU runs, it's just scheduling work on the GPU, it's ordering some work, right? And so it sends a request, and then it continues running. And so it can happen sometimes that we sort of speed through this and we queue up a lot of kernels to run on the GPU, and then the CPU sort of gets here and takes the time, but actually the GPU is still running, because it takes it time to actually work through the work that was scheduled to run. And so you're just building up a queue for the GPU. And so actually, if you need to, you want to call torch.cuda.synchronize, and this will wait for the GPU to finish all the work that was scheduled to run up above here, and then we can actually take the time. So basically we're waiting for the GPU to stop this iteration, take the time, and then we're going to just print it. So here I'm going to run the training loop, and here on the right I'm watching nvidia-smi. So we start off at zero, we're not using the GPU, and then by default PyTorch will use GPU zero, so we see that it gets filled up, and we're using 35 gigabytes out of 80 gigabytes available. And then here on the left, we see that, because we've cranked up the batch size, now it's only 20 batches to do a single epoch on our tiny Shakespeare. And we see that we're seeing roughly a thousand milliseconds per iteration here, right? So the first iteration sometimes is slower, and that's because PyTorch might be doing a lot of initializations here on the very first iteration, and so it's probably initializing all these tensors and buffers to hold all the gradients, and I'm not 100% sure of all the work that happens here, but this could be a slower iteration. When you're timing your logic, you always want to be careful with that. But basically we're seeing 1,000 milliseconds per iteration, and so this will run for roughly 50 seconds as we have it right now.
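A sketch of the timing loop with the synchronize call described above, including the tokens-per-second readout that gets added shortly after (train_loader, model, optimizer, and device assumed as before):

```python
import time

for step in range(50):
    t0 = time.time()
    x, y = train_loader.next_batch()
    x, y = x.to(device), y.to(device)
    optimizer.zero_grad()
    logits, loss = model(x, y)
    loss.backward()
    optimizer.step()
    torch.cuda.synchronize()   # wait for the GPU to finish the work queued above
    t1 = time.time()
    dt = (t1 - t0) * 1000
    tokens_per_sec = (train_loader.B * train_loader.T) / (t1 - t0)
    print(f"step {step}, loss: {loss.item():.4f}, dt: {dt:.2f}ms, tok/sec: {tokens_per_sec:.2f}")
```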
So that's our baseline in float32. One more thing I wanted to mention is that if this doesn't fit into your GPU and you're getting out-of-memory errors, then start decreasing your batch size until things fit. So instead of 16, try 8, or 4, or whatever you need to fit the batch into your GPU. And if you have a bigger GPU, you can actually potentially get away with 32 and so on. By default, you want to basically max out the batch size that fits on your GPU, and you want to keep it nice numbers, so use numbers that have lots of powers of 2 in them. So 16 is a good number, 8, 24, 32, 48, these are nice numbers, but don't use something like 17, because that will run very inefficiently on the GPU, and we're going to see that a bit later as well. So for now, let's just stick with 16, 1024. And the one thing that I added also here, and I ran it again, is I'm calculating the tokens per second throughput during training, because we might end up changing the batch size around over time, but tokens per second is the objective measure that we actually really care about: how many tokens of data are we training on, and what is the throughput of tokens that we're getting in our optimization? So right now, we're processing and training on roughly 16,000 tokens per second, and that's a bit more objective metric. Okay, so let's now enable TF32. Now, luckily, PyTorch makes this fairly easy for us, and to enable TF32 you just need to do a single line, and it's this. And when we go to the PyTorch documentation here for this function, basically this tells PyTorch what kind of kernels to run, and by default, I believe, it is 'highest', the highest precision for matmul, and that means that everything happens in float32, just like it did before. But if we set it to 'high', as we do right here, matrix multiplications will now use TF32 when it's available. My GPU is the A100, so it's the Ampere series, and therefore TF32 is available. If you have an older GPU, this might not be available for you, but for my GPU it's available. And so what I expect PyTorch to do is that every single place where we see an nn.Linear, inside there there's a matrix multiplication, and I expect that matrix multiplication now to be running on tensor cores, utilizing the TF32 precision. So this is the single line of change that is, I believe, necessary, and let's rerun this. Now we saw that, in terms of the throughput that is promised to us, we're supposed to be getting 8x roughly. So let's see what happens. And that 8x came from here, right, 8x, and it also came from looking at it here, 156 teraflops instead of 19.5. Okay, so what actually happened? So we're seeing that our throughput roughly 3x'd, not 8x'd. So we are going from 1000 milliseconds, we're going down to 300 milliseconds, and our throughput is now about 50,000 tokens per second. So we have roughly a 3x instead of an 8x. So what happened? And basically what's happening here is, again, a lot of these workloads are memory bound. And so even though the TF32 offers, in principle, a lot faster throughput, all of these numbers everywhere are still float32s, and it's float32 numbers that are being shipped all over the place through the memory system, and it's just costing us way too much time to shuttle around all this data. And so even though we've made the multiply itself much faster, we are memory bound, and we're not actually seeing the full benefit that would come from this napkin math here. That said, we are getting a 3x faster throughput, and this is free.
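The single line that enables TF32 matmuls is the actual PyTorch API:

```python
torch.set_float32_matmul_precision("high")  # "highest" keeps full FP32; "high" allows TF32 matmuls on supporting GPUs
```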
A single line of code in PyTorch, all your variables are still float32 everywhere, it just runs faster, and it's slightly more approximate, but we're not going to notice it, basically. So that's TF32. Okay, so let's now continue. So we've exercised this row, and we saw that we can crop out some of the precision inside the operation itself, but we saw that we're still memory bound, we're still moving around all these floats, right, and we're paying that cost because of this. So let's now decrease the amount of stuff that we're going to be moving around, and we're going to do that by dropping down to bfloat16. So we're only going to be maintaining 16 bits per float, and we're going to use bfloat16, and I'll explain in a bit the difference with FP16, and we're going to be in this row. So when we go back to the documentation here for the A100, we see here the precisions that are available, and this is the original FP32. The TF32 crops out the precision, and then here in BF16, you see that it is very similar to TF32, but it's even more aggressive in cropping off the precision, the mantissa, of this float. So the important thing with BF16 is that the exponent bits and the sign bit, of course, remain unchanged. So if you're familiar with your float numbers, and I think this should probably be an entire video by itself, the exponent sets the range that you can represent with your numbers, and the precision is how much precision you have for your numbers. And so the range of numbers is identical, but we have fewer possibilities within that range, because we are truncating the mantissa. So we have less precision in that range. What that means is that things are actually fairly nice, because we have the original range of numbers that are representable in float, but we just have less precision for it. And the difference with FP16 is that they actually touch and change the range. So FP16 cannot represent the full range of FP32, it has a reduced range, and that's where you start to actually run into issues, because now you need these gradient scalers and things like that. And I'm not going to go into the detail of that in this video, because that's a whole video by itself, but FP16 actually historically came first; that was available in the Volta series, before Ampere. And so FP16 came first, and everyone started to train in FP16, but everyone had to use all these gradient scaling operations, which are kind of annoying, and it's an additional source of state and complexity. And the reason for that was because the exponent range was reduced in FP16; so that's the IEEE FP16 spec. And then they came out with BF16 in Ampere, and they made it much simpler, because we're just truncating the mantissa, we have the exact same range, and we do not need gradient scalers. So everything is much, much simpler. Now, when we do use BF16, though, we are impacting the numbers that we might be seeing in our PyTorch code. This change is not just local to the operation itself. So let's see how that works. There's some documentation here, and I think this is probably the best page to explain how to use mixed precision in PyTorch, because there are many other tutorials and so on, even within the PyTorch documentation, that are a lot more confusing, and so I recommend specifically this one.
Because there are five other copies that I would not recommend. And then when we come here, ignore everything about gradient scalers and only look at torch.autocast, and basically, also, this comes down to a single line of code at the end. So this is the context manager that we want, and we want to use that in our network. When you click into torch.autocast, autocasting, it has a bit more guidance for you. So it's telling you: do not call bfloat16 on any of your tensors, just use autocast, and only surround the forward pass of the model and the loss calculation, and those are the only two things that you should be surrounding. Leave the backward and the optimizer step alone. So that's the guidance that comes from the PyTorch team, so we're going to follow that guidance, and for us, because the loss calculation is inside of the model's forward pass, we are going to be doing this. And then we don't want to be using torch.float16, because if we do that, we need to start using gradient scalers as well, so we are going to be using bfloat16. This is only possible to do on Ampere, but this means that the changes are extremely minimal, like basically just this one line of code. Let me first break in here before we actually run this. So, right after the logits, I'd like to show you that, different from the TF32 that we saw, this is actually going to impact our tensors. So this logits tensor, if we now look at this and we look at the dtype, we suddenly see that this is now bfloat16, it's not float32 anymore, so our activations have been changed. The activations tensor is now bfloat16, but not everything has changed. So model.transformer.wte, this is the token embedding table, it has a .weight inside it, and the dtype of this weight, this parameter, is still torch.float32. So our parameters seem to still be in float32, but our activations, the logits, are now in bfloat16. So clearly this is why we get the mixed precision: some things PyTorch is keeping in float32, some things PyTorch is converting to lower precision, and what gets converted at what point is not super clear. I remember scrolling down... is it here? Okay, I can't find it. I thought it was here. Okay, there we go. So there are a few docs on, when you're using this autocast, what gets converted to bfloat16 and when. So, for example, only these matrix-multiply-like operations get converted to bfloat16, but a lot of operations remain in float32. So in particular, a lot of normalizations, like layer norms and things like that, not all of those layers might be converted. So only some layers, selectively, would be running in bfloat16. But things like softmax, layer norms, log softmax, so loss function calculations, a lot of those things might remain in float32, because they are more susceptible to precision changes. Matrix multiplies are fairly robust to precision changes. So some parts of the network are impacted more or less by the precision change. So basically only some parts of the model are running in reduced precision. Let's take it for a spin and let's actually see what kind of improvement we achieve here. Okay, so we used to be at 333 milliseconds, we're now at 300, and we used to be somewhere around 50,000 tokens per second, we're now at 55. So we're definitely running faster, but maybe not a lot faster, and that's because there are still many, many bottlenecks in our GPT-2.
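A sketch of the autocast usage per the guidance above: surround only the forward pass and the loss, and keep backward and the optimizer step outside:

```python
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    logits, loss = model(x, y)   # forward pass + loss inside autocast
loss.backward()                  # backward stays outside
optimizer.step()                 # optimizer step stays outside
```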
We're just getting started, but we have dropped down the precision as far as we can with my current GPU, which is an A100. We're using PyTorch autocast. Unfortunately, I don't actually exactly know what PyTorch autocast does; I don't actually know exactly what's in bfloat16 and what's in float32. We could go in and we could start to scrutinize it, but these are the kinds of rules that PyTorch has internally, and unfortunately they don't document it very well. So we're not going to go into that in too much detail. But for now, we are training in bfloat16, we do not need a gradient scaler, and the reason things are running faster is because we are able to run tensor cores in bfloat16 now. That means we are in this row, but we are also paying in precision for this, so we expect slightly less accurate results with respect to the original FP32. But empirically, in many cases, this is a worth-it kind of trade-off, because it allows you to run faster, and you could, for example, train longer and make up for that precision decrease. So that's BF16 for now. Okay, so as we can see, we are currently at about 300 milliseconds per iteration, and we're now going to reach for some really heavy weapons in the PyTorch arsenal, and in particular we're going to introduce torch.compile. So torch.compile is really quite incredible infrastructure from the PyTorch team, and it's basically a compiler for neural networks. Like, it's almost like GCC for C/C++ code; this is just the GCC of neural nets. It came out a while ago and it's extremely simple to use. The way to use torch.compile is to do this: it's a single line of code to compile your model and return it. Now, this line of code will cost you compilation time, but as you might guess, it's going to make the code a lot faster. So let's actually run that, because this will take some time to run, but currently, remember, we're at 300 milliseconds, and we'll see what happens. Now, while this is running, I'd like to explain a little bit of what torch.compile does under the hood. So feel free to read this page of PyTorch, but basically there's no real good reason for you to not use torch.compile in your PyTorch. I kind of feel like you should be using it almost by default, unless you're debugging, if you want your code to run really fast. And there's one line here about torch.compile that I found that actually kind of gets to why this is faster: speedup mainly comes from reducing Python overhead and GPU read/writes. So let me unpack that a little bit. Okay, here we are. Okay, so we went from 300 milliseconds, we're now running at 129 milliseconds. So this is 300 divided by 129, about a 2.3x improvement, from a single line of code in PyTorch. So, quite incredible. So what is happening? What's happening under the hood? Well, when you pass the model to torch.compile, what we have here in this nn.Module, this is really just the algorithmic description of what we'd like to happen in our network, and torch.compile will analyze the entire thing, and it will look at what operations you'd like to use, and with the benefit of knowing exactly what's going to happen, it doesn't have to run in what's called eager mode. It doesn't have to just kind of go layer by layer, like the Python interpreter normally would, starting at the forward. The Python interpreter will go, okay, let's do this operation, and then let's do that operation, and it kind of materializes all the operations as it goes through.
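The torch.compile change really is just one line:

```python
model = torch.compile(model)  # pays a one-time compilation cost, then runs much faster
```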
So these calculations are dispatched and run in this order, and the Python interpreter and this code don't know what kinds of operations are going to happen later. But torch.compile sees your entire code at the same time; it knows what operations you intend to run, and it will optimize that process. The first thing it does is take the Python interpreter out of the forward pass entirely: it compiles this entire neural net as a single object, with no Python interpreter involved. It knows exactly what's going to run, it will just run that, and it's all going to run in efficient code. The second thing that happens is the read/writes they mention very briefly. A good example of that, I think, is the GELU nonlinearity that we've been looking at. Here we use nn.GELU, and this here is me basically breaking up nn.GELU, which you remember has this formula. So this is the equivalent implementation to what's happening inside GELU; algorithmically it's identical. Now, by default, if we just used this instead of nn.GELU here, what would happen without torch.compile? Well, the Python interpreter would make its way here and go: okay, there's an input, let me first raise this input to the third power. It dispatches a kernel that takes your input and raises it to the third power, and that kernel runs. And when this kernel runs, the input is stored in the memory of the GPU. Here's a helpful diagram of the layout of what's happening. You have your CPU, which is in every single computer, with a few cores in it, and you have your RAM, your memory, and the CPU can talk to the memory; this is all well known. But now we've added the GPU, which is a slightly different architecture. They can communicate, and the GPU differs in that it has a lot more cores than a CPU, though all of those cores are individually a lot simpler. It also has its own memory, this high bandwidth memory, HBM, which is basically the equivalent of RAM in the computer. What's happening is that the input is living in that memory, and when you do input cubed, it has to travel to the GPU, to the cores and all the caches and registers on the actual chip, where all the elements get raised to the third power, and then the result is saved back to the memory. It's this travel time that causes a lot of issues. So here, remember this memory bandwidth? We can communicate about two terabytes per second, which is a lot, but we still have to traverse this link, and it's very slow. On the GPU we're on chip, and everything is super fast within the chip, but going to the memory is extremely expensive and takes an extremely long amount of time. So we load the input, do the calculations, and write back the output, and this round trip takes a lot of time. And right after we do that, we multiply by this constant, so we dispatch another kernel, the data travels back to the GPU, all the elements get multiplied by a constant, and the results travel back to the memory. And then we take the result and add back the input, so this entire thing again travels to the GPU, the inputs get added, and it gets written back.
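As a small, self-contained illustration of the elementwise sequence being described, here is the standard tanh approximation of GELU, the same formula nn.GELU(approximate='tanh') implements; without fusion, each elementwise line would be its own kernel launch and its own round trip to HBM:

```python
import math
import torch
import torch.nn.functional as F

def gelu_tanh_manual(x):
    # each of these elementwise steps would be a separate kernel (and a
    # separate trip to GPU memory) if run eagerly without kernel fusion
    cube = 0.044715 * x * x * x
    inner = math.sqrt(2.0 / math.pi) * (x + cube)
    return 0.5 * x * (1.0 + torch.tanh(inner))

x = torch.randn(8, 768)
assert torch.allclose(gelu_tanh_manual(x), F.gelu(x, approximate="tanh"), atol=1e-5)
```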
So we're making all these round trips from the memory to where the computation actually happens, because all the tensor cores and ALUs and everything like that live on the chip of the GPU. We're doing a ton of round trips, and PyTorch, without torch.compile, doesn't know to optimize this, because it doesn't know what kind of operations you're running later. You're just telling it: raise to the third power, then do this, then do that, and it will just do that in that sequence. But torch.compile sees your entire code. It will come here and realize: wait, all of these are element-wise operations, so what I'm actually going to do is a single trip of the input to the GPU, and then for every single element (or chunks of it, rather) do all of these operations while that memory is on the chip, and then write back a single time. So we don't have these round trips. That's one example of what's called kernel fusion, and it's a major way in which everything gets sped up. Basically, if you have the full picture up front and you know exactly what you're going to compute, you can optimize your round trips to memory and avoid paying the memory bandwidth cost, and that's fundamentally what makes some of these operations a lot faster, and what they mean by read/writes here. So let me erase this, because we are not using it, and yes, we should be using torch.compile. Our code is now significantly faster, and we're doing about 125,000 tokens per second, but we still have a long way to go. Before we move on, I wanted to supplement the discussion with a few more figures, because this is a complicated topic, but it's worth understanding at a high level what's happening here. I could probably spend an entire two-hour video on this, but just as a preview: this chip here is the GPU, and this chip is where all the calculations mostly happen. The chip does have some memory in it, but most of the memory by far is here, in the high bandwidth memory, HBM, and they're connected; these are basically two separate chips. Now here is a zoom-in of this cartoon diagram of a GPU, and what we're seeing is, number one, the HBM. I realize it's probably very small for you, but on the sides here it says HBM, and those are the links to the HBM, which again is off-chip. On the chip, there is a large number of these streaming multiprocessors; every one of these is an SM, there are 120 of them in total, and this is where a lot of the calculations happen. And this is a zoom-in of a single individual SM. It has these four quadrants, and you see, for example, the tensor core, which is where a lot of the matrix multiply stuff happens, but there are all these other units to do all different kinds of calculations for FP64, FP32, integers, and so on. So we have all this logic for the calculations, but in addition, there is memory sprinkled throughout the chip. The L2 cache is some amount of memory that lives on the chip, and then on the SMs themselves there is L1 cache (this blue bar, which I realize is probably very small for you) and there are also registers. So there is memory stored here, but the way this memory is stored is very different from the way memory is stored in HBM; in terms of what the silicon looks like, it's a very different implementation.
So here you would be using transistors and capacitors, and here it's a very different implementation with SRAM. But long story short, there is memory inside the chip, it's just not a lot of memory; that's the critical point. This is an example diagram of a slightly different GPU, just like here, where it shows that, for example, typical numbers for CPU DRAM memory, which is this thing here: you might have a terabyte of it, but it would be extremely expensive to access, especially from a GPU, because you have to go through the CPU. Next we have the HBM: there are tens of gigabytes of HBM memory on a typical GPU, but, as I mentioned, it's very expensive to access. And then on the chip itself, everything is extremely fast within the chip, but we only have a couple of tens of megabytes of memory collectively throughout the chip. There's just not enough space, because memory on the chip is very expensive, so there's not a lot of it, but it is lightning fast to access in relative terms. So basically, whenever we have these kernels, the more accurate picture of what's happening is that we take these inputs, which live by default in the global memory, and when we need to perform some calculation, we start streaming the data from the global memory to the chip, perform the calculations on the chip, and then stream the result back and store it in global memory. If we don't have torch.compile, we are streaming the data through the chip, doing the calculations, saving back to memory, and making those round trips many, many times. But if it's torch-compiled, we start streaming the memory as before, but then the chunk of data we're trying to process lives on the chip, and while it's on the chip, it's extremely fast to operate on. So with kernel fusion we can do all the operations right there, in an element-wise fashion, very cheaply, and then do a single round trip back to the global memory. Operator fusion basically allows you to keep your chunk of data on the chip and do lots of calculations on it before you write it back, and that gives huge savings. That's why torch.compile ends up being a lot faster, or at least that's one of the major reasons. So again, just a very brief intro to the memory hierarchy and roughly what torch.compile does for you. Now, torch.compile is amazing, but there are operations that it will not find, and an amazing example of that is FlashAttention, to which we turn next. FlashAttention comes from this paper from Stanford in 2022, and it's an incredible algorithm for performing attention and running it a lot faster. FlashAttention will come here, and we will take out these four lines; FlashAttention implements these four lines really, really quickly. How does it do that? Well, FlashAttention is a kernel fusion operation. You see here in this diagram they're showing PyTorch with these four operations (they include dropout, but we are not using dropout here, so we just have these four lines of code), and instead of those, we fuse them into a single fused kernel of FlashAttention. So it's a kernel fusion algorithm, but it's a kernel fusion that torch.compile cannot find.
And the reason it cannot find it is that it requires an algorithmic rewrite of how attention is actually implemented here. What's remarkable is that, if you just count the number of FLOPs, FlashAttention actually does more FLOPs than this attention here, yet it is significantly faster; in fact, they cite it as potentially 7.6 times faster. That's because it is very mindful of the memory hierarchy, as I described it just now. It's very careful about what lives in high bandwidth memory and what lives in shared memory, and it orchestrates the computation such that we have far fewer reads and writes to HBM. So even though we're doing more FLOPs, the expensive part is the loads and stores to HBM, and that's what it avoids. In particular, it never materializes this large T-by-T attention matrix, this att here. FlashAttention is designed such that this matrix never gets materialized at any point, and it never gets read from or written to HBM. And this is a very large matrix, right? This is where all the queries and keys interact, and for each head, for each batch element, we get a T-by-T matrix of attention, which is a million numbers even for a single head at a single batch index. So this is a ton of memory, and it is never materialized. The way this is achieved is that the fundamental algorithmic rewrite relies on the online softmax trick, which was proposed previously, and I'll show you the paper in a bit. The online softmax trick, coming from a previous paper, shows how you can incrementally evaluate a softmax without having to materialize all of the inputs to the softmax to do the normalization. You do that by keeping two intermediate variables, m and l, with an update rule that allows you to evaluate the softmax in an online manner. Now, FlashAttention 2 recently came out as well, so I have that paper up here too; it has additional gains in how it calculates the attention. And the original paper this is based on is this "online normalizer calculation for softmax," which, remarkably, came out of NVIDIA in really early 2018, four years before FlashAttention. This paper says: we propose a way to compute the classical softmax with fewer memory accesses and hypothesize that this reduction in memory accesses should improve softmax performance on actual hardware. And they are extremely correct in this hypothesis. It's really fascinating to me that they were at NVIDIA and had this realization, but didn't take it all the way to what became FlashAttention, which had to come four years later from Stanford; I don't fully understand how that happened historically. But they do propose this online update to the softmax right here, and this is fundamentally what FlashAttention reuses to calculate the softmax in a streaming manner, and then they realize they can fuse all the other operations with the online softmax calculation into a single fused kernel, FlashAttention, and that's what we are about to use.
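Two small sketches to make this concrete, both illustrative rather than the actual FlashAttention kernel. First, the online softmax idea: keep a running maximum m and a running normalizer l and update them as scores stream in, so the full vector never needs to be normalized in one shot:

```python
import torch

def online_softmax(scores):
    m = torch.tensor(float("-inf"))  # running max
    l = torch.tensor(0.0)            # running normalizer
    for x in scores:
        m_new = torch.maximum(m, x)
        # rescale the old normalizer when the max moves, then add the new term
        l = l * torch.exp(m - m_new) + torch.exp(x - m_new)
        m = m_new
    return torch.exp(scores - m) / l

s = torch.randn(16)
assert torch.allclose(online_softmax(s), torch.softmax(s, dim=0), atol=1e-6)
```

And second, the swap we are about to make in the attention block: the four manual lines that materialize the full T-by-T matrix versus the single fused call (shapes here are just for illustration):

```python
import math
import torch
import torch.nn.functional as F

B, n_head, T, hs = 4, 12, 128, 64
q, k, v = (torch.randn(B, n_head, T, hs) for _ in range(3))

# manual attention: materializes the full T-by-T att matrix
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
mask = torch.tril(torch.ones(T, T)).view(1, 1, T, T)
att = att.masked_fill(mask == 0, float("-inf"))
att = F.softmax(att, dim=-1)
y_manual = att @ v

# fused call; dispatches to a FlashAttention-style kernel when one is available
y_fused = F.scaled_dot_product_attention(q, k, v, is_causal=True)
assert torch.allclose(y_manual, y_fused, atol=1e-4)
```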
So this is a great example, I think, of being aware of the memory hierarchy: the fact that FLOPs don't matter so much, the entire memory access pattern matters, and that torch.compile is amazing but there are many optimizations still available to us that it potentially cannot find. Maybe one day it could, but right now it seems like a lot to ask. So here's what we're going to do: we're going to use FlashAttention. The way to do that in PyTorch is to comment out these four lines and replace them with a single line, calling this compound operation in PyTorch called scaled_dot_product_attention. PyTorch will call FlashAttention when you use it in this way. I'm not actually 100% sure why torch.compile doesn't realize that these four lines should just call FlashAttention; we have to do it for it, which in my opinion is a little bit odd, but here we are. So you have to use this compound op. Let's wait a few moments for torch.compile to get around to it, and remember that we previously achieved a loss of 6.05661, which is what we expect to see, and we took 130 milliseconds before this change. We expect the exact same result by iteration 49, but a faster runtime, because FlashAttention is just an algorithmic rewrite, a faster kernel, and it doesn't actually change any of the computation. And we achieve 6.058, so they're basically identical up to a floating point fudge factor: identical computation, but significantly faster, going from 130 to roughly 96 milliseconds. That's 96 divided by 130-ish, so maybe a 27-ish percent improvement. Really interesting, and that is FlashAttention. Okay, we are now getting to one of my favorite optimizations; it is simultaneously the dumbest and the most brilliant optimization, and it's always a little bit surprising to me. So basically, I mentioned a few minutes ago that there are some numbers that are nice and some that are ugly. 64 is a beautiful, nice number, 128 is even nicer, 256 is beautiful. What makes these numbers beautiful is that there are many powers of two inside them: you can divide by two many times. Examples of ugly numbers are 13 and 17 and things like that, prime numbers, numbers that are not even, and so on. Pretty much, you always want to use nice numbers in all of your code that deals with neural networks or CUDA, because everything in CUDA works in powers of two: lots of kernels are written in terms of powers of two, with blocks of sizes 16 and 64 and so on, and there is all kinds of special-case handling for when your inputs are not made of nice numbers. So the rough heuristic is: scan your code and look for ugly numbers. This 3-times is kind of ugly; I'm not 100% sure, maybe it can be improved, but it's not ideal. That's nice. 1,024 is very nice, that's a power of 2. 12 is a little bit suspicious, not too many powers of 2. 768 is great. 50,257 is a really, really ugly number: first of all, it's odd, and there aren't many powers of 2 in there, so it's highly suspicious. And then when we scroll down, all these numbers are nice, and then here we have mostly nice numbers, except for 25.
So in this configuration of GPT-2 XL, the number of heads is 25. That's a really ugly number: it's odd, and it actually caused a lot of headaches for us recently when we were trying to optimize some kernels to run this fast, requiring a bunch of special-case handling. So basically, we have some ugly numbers, and some of them are easier to fix than others. In particular, the vocab size being 50,257 is a very ugly, very suspicious number, and we're going to fix it. One of the easy ways to fix these things is to just increase the number until it has the nice powers of two you want. So here's a much nicer number: 50,304. Why? Because 50,304 can be divided by 8, by 16, by 32, by 64, and I think even by 128. So it's a very nice number. What we're going to do here, in the GPTConfig where we initialize vocab_size to 50,257, is override just that element to be 50,304 (see the sketch after this paragraph). Everything else stays the same; we're just increasing our vocabulary size, almost like adding fake tokens, so that the vocab size has lots of powers of two inside it. Now, what I'm actually doing here, by the way, is increasing the amount of computation that our network will be doing: if you just count the FLOPs, we're going to be doing more of them, and we still have to think through whether this doesn't break anything. But if I just run this, let's see what we get. Currently this ran in maybe 96.5 milliseconds per step, just eyeballing it, and let's see what kind of a result we get. While this is compiling, let's think through whether our code actually works okay when we increase the vocab size like this. Where is vocab_size actually used? We swing up to the init and see that it's used in the embedding table, at the very bottom of the transformer, and in the classifier layer, all the way at the top of the transformer; so in two places. And let's take a look: we're running at 93 milliseconds instead of 96.5, so we are seeing roughly a 4% improvement by doing more calculations, and the reason is that we've made an ugly number into a nice number. I'll come back to the explanation for that in a bit, but for now let's just convince ourselves that we're not breaking anything when we do this. First, we've made the WTE, the embedding table for the tokens, larger. It's as if we introduced more tokens at the bottom, and these tokens are never used, because the GPT-2 tokenizer only has tokens up to 50,256, so we'll never index into the rows we've added. We're wasting a little bit of space by creating memory that's never going to be accessed or used. Now, that's not fully the whole story, because this WTE weight ends up being shared and used in the classifier at the end. So what is this doing to the classifier? Well, we're now predicting additional dimensions in the classifier, predicting probabilities for tokens that will of course never be present in the training set, and so the network has to learn that these probabilities have to be driven to zero.
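The override just mentioned is tiny; as a sketch of the arithmetic (the GPTConfig line at the end is the config class used in this walkthrough):

```python
# pad the vocabulary up to the nearest multiple of 128
vocab_size = 50257
padded = ((vocab_size + 127) // 128) * 128
print(padded)                 # 50304
assert padded % 128 == 0      # also divisible by 8, 16, 32 and 64
# in the walkthrough's config this becomes: GPT(GPTConfig(vocab_size=50304))
```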
And so the logits that the network produces have to drive those dimensions of the output to negative infinity. But that's no different from all the other tokens that are not in our dataset. Shakespeare probably only uses, let's say, a thousand tokens out of the 50,257, so most of the tokens are already being driven to zero probability by the optimization; we've just introduced a few more tokens that, in a similar manner, will never be used and have to be driven to zero probability. So functionally, nothing breaks. We're using a bit of extra memory, but otherwise this is a harmless operation as far as I can tell, and we're adding computation, yet it's running faster. It's running faster because, as I mentioned, in CUDA so many kernels use block tiles, and these block tiles are usually nice numbers, powers of two, so calculations are done in chunks of 64 or chunks of 32. When your desired calculation doesn't neatly fit into those block tiles, all kinds of boundary kernels can kick in to handle the last part. Basically, a lot of kernels will chunk up your input, do the nice part first, and then have a whole second phase where they come back to whatever remains and process that remaining part, and the kernels for that can be very inefficient. You're spinning up all this extra compute that is extremely inefficient, so you might as well pad your inputs and make them fit nicely, and empirically that usually ends up actually running faster. So this is another example of a 4% improvement that we've added, and it's something that torch.compile also did not find for us. You would hope that torch.compile could at some point figure out an optimization like this, but for now, this is it. I also have to point out that we're using a PyTorch nightly, which is why we're only seeing 4%; if you're using PyTorch 2.3.1 or earlier, you would actually see something like a 30% improvement just from changing the vocab size from 50,257 to 50,304. So again, one of my favorite examples of having to understand how it all works under the hood, and of knowing what kinds of things to tinker with to push the performance of your code. Okay, so at this point we have improved the performance by about 11x, because we started at about 1,000 milliseconds per step and we're now down to about 93 milliseconds. That's quite good, and we're doing a much better job of utilizing our GPU resources. I'm now going to turn to more algorithmic changes and improvements to the actual optimization itself, and what we'd like to do is follow the hyperparameters mentioned in the GPT-2 and GPT-3 papers. Sadly, GPT-2 doesn't actually say too much. It's very nice of them that they released the model weights and the code, but the paper itself is extremely vague about the optimization details, and the code they released, which we've been looking at, is just the inference code: there's no training code and very few hyperparameters, so it doesn't tell us too much either. For that, we have to turn to the GPT-3 paper, and in the appendix of the GPT-3 paper they give a lot more hyperparameters for us to use.
And the GPT-3 paper in general is a lot more detailed about all the small things that go into the model training, but the GPT-3 models were never released. So for GPT-2 we have the weights but no details, and for GPT-3 we have lots of details but no weights. Roughly speaking, though, the GPT-2 and GPT-3 architectures are very, very similar and there are very few changes: the context length was expanded from 1024 to 2048, which is the major one, and some of the hyperparameters around the transformer changed, but otherwise they're pretty much the same model. It's just that GPT-3 was trained for a lot longer on a bigger dataset and has much more thorough evaluations, and the GPT-3 model is 175 billion parameters instead of the 1.6 billion of GPT-2. So long story short, we're going to follow some of the hyperparameters from the GPT-3 paper. To train all the versions of GPT-3, they use Adam with beta1 and beta2 of 0.9 and 0.95. So let's swing over here and make sure that the betas parameter, which you can see defaults to (0.9, 0.999), is actually set to (0.9, 0.95), and the epsilon parameter, whose default is 1e-8, is also 1e-8; let's just put it in explicitly. Next, they say they clip the global norm of the gradient at 1.0. What this refers to is that once we calculate the gradients, right after loss.backward(), we have the gradients on all the parameter tensors, and people like to clip them to have some maximum norm. In PyTorch this is fairly easy to do: it's one line of code that we insert right after we calculate the gradients. What this utility function does is calculate the global norm of the gradients: every single gradient on all the parameters is squared, they're all added up, and you take a big square root of that. That's the norm of the gradient vector, basically its length, if you'd like to look at it that way, and we are making sure that its length is no more than 1.0 by clipping it. The reason people like to use this is that sometimes you can get unlucky during the optimization; maybe it's a bad data batch or something like that, and if you get very unlucky you might get a really high loss, and a really high loss can lead to a really high gradient, which can shock your model and shock the optimization. So people use gradient norm clipping to prevent the model from getting too big of a shock in terms of gradient magnitude; it's upper-bounded in this way. It's a bit of a hacky solution, a patch on top of deeper issues, but people still do it fairly frequently. Now, clip_grad_norm_ returns the norm of the gradient, which I like to always visualize, because it's useful information: if the norm is well behaved, things are good; if it's climbing, things are bad and destabilizing during training; and sometimes you get a spike in the norm, which means there's some kind of issue or instability. So we'll capture it as norm and print it with a .4f format, since I believe it's just a float. So that's global gradient clipping. Next they go into the details of the learning rate scheduler.
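Before moving on to the scheduler, here's a compact, runnable sketch of the optimizer settings and the clipping just described. A stand-in linear model and MSE loss are used so the snippet runs on its own; in the walkthrough the model is the GPT module with its cross-entropy loss.

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 50304)                       # stand-in for the GPT model
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=3e-4, betas=(0.9, 0.95), eps=1e-8)

x, y = torch.randn(16, 768), torch.randn(16, 50304)
optimizer.zero_grad(set_to_none=True)
loss = nn.functional.mse_loss(model(x), y)          # placeholder loss
loss.backward()
# clip the global norm of all gradients at 1.0; returns the pre-clip norm
norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
print(f"loss: {loss.item():.6f} | norm: {norm:.4f}")
```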
So they don't just use a fixed learning rate like we do here with 3e-4; there's actually a cosine-decay learning rate schedule. It has a warm-up, and it has a cosine decay down to 10% over some horizon. We're going to implement this in a second; I'd first just like to see the norm printed here. Okay, there we go. What happened is that the norm is actually really high in the beginning, 30 or so, and you see that as we continue training it stabilizes at values below one. It's not that uncommon for the norm to be high in the very first few stages: the model is completely random, so there's a ton of learning happening very early in the network, but that learning is mostly learning the biases of the output tokens. So it's a bit of an unstable time, but the network usually stabilizes within a very few iterations. This looks relatively reasonable to me, except that it looks a little bit funky that we go from 28 to 6 to 2 and then back up to 10; not completely insane, just a little bit funky. Okay, so let's now get to the learning rate scheduler. The schedule used here in GPT-3 is what's called a cosine-decay learning rate schedule with warm-up, and the way it looks is that the learning rate basically starts right around zero, linearly ramps up over some amount of time, and then comes down with this cosine form to some minimum learning rate that's up to you. Here the minimum is zero, but in the paper they say they use cosine decay down to 10% of the learning rate's value over the first 260 billion tokens, then training continues at 10% after that, and there's a linear warm-up over the first 375 million tokens. So that's the learning rate; let's now implement it. I already implemented it here, and the way it works is, let me scroll down first, I changed our training loop a little bit. This used to be "for i in range(max_steps)"; I changed it to step, so that we have the notion of a step being a single optimization step of the for loop. Then here I get the learning rate for this step of the optimization using a new function I call get_lr. And then, to set the learning rate in PyTorch, I think this is the way to do it; it's a little bit gnarly, because there's a notion of different parameter groups that can exist in the optimizer, so you actually have to iterate over them, even though we currently only have a single param group, and you set the lr in this for-loop style, as far as I can tell. So we look up the lr, set the learning rate, and then at the bottom I'm also printing it. That's all the changes I made to this loop, and then of course get_lr is my scheduler. Now, it's worth pointing out that PyTorch actually has learning rate schedulers, and you can use them; I believe there's a cosine learning rate schedule in PyTorch.
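For reference, here's a minimal warmup-plus-cosine-decay get_lr along the lines described. The specific step counts below are placeholders for illustration, and the param_groups loop at the bottom shows the idiom for setting the rate each step.

```python
import math
import torch

max_lr = 6e-4
min_lr = max_lr * 0.1      # decay down to 10% of the maximum
warmup_steps = 10          # placeholder values for illustration
max_steps = 50

def get_lr(it):
    if it < warmup_steps:                       # 1) linear warmup
        return max_lr * (it + 1) / warmup_steps
    if it > max_steps:                          # 2) past the horizon: hold at min
        return min_lr
    # 3) in between: cosine decay from max_lr down to min_lr
    decay_ratio = (it - warmup_steps) / (max_steps - warmup_steps)
    coeff = 0.5 * (1.0 + math.cos(math.pi * decay_ratio))  # goes 1 -> 0
    return min_lr + coeff * (max_lr - min_lr)

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=max_lr)
for step in range(max_steps):
    lr = get_lr(step)
    for param_group in optimizer.param_groups:  # set it on every param group
        param_group["lr"] = lr
```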
I just don't really love using those built-in schedulers because, honestly, this is like five lines of code and I fully understand what's happening inside those lines, so I don't love using abstractions when they're kind of inscrutable and I don't know what they're doing; it's a personal style thing. So the max learning rate here is, let's say, 3e-4, but we're going to see that in GPT-3 they have a table of the maximum learning rate for every model size. For this one, basically the 12-layer, 768-channel GPT-3 Small, which is roughly like GPT-2 124M, we see that they use a learning rate of 6e-4, so we could actually go higher; in fact, we may want to follow that and just set the max lr here to 6e-4. So that's the maximum learning rate, the min learning rate is 10% of that per the description in the paper, there's some number of steps that we're going to warm up over, and then the maximum steps of the optimization, which I now also use in the for loop down here. You can go over this code if you like; it's not terribly interesting, I'm just deciding, based on the iteration number, which learning rate there should be: this is the warm-up region, this is the region after the optimization horizon, and then this is the region in between, where I calculate the cosine learning rate. You can step through it in detail if you'd like, but it's basically implementing this curve. And I ran this already, and this is what it looks like. So when we run, we start at some very low number; note that we don't start exactly at zero, because that would not be useful, updating with a learning rate of zero. That's why there's an it-plus-one, so that on the zeroth iteration we are not using exactly zero, just something very, very low. Then we linearly warm up to the maximum learning rate, which was 3e-4 when I ran it but would now be 6e-4, and then it starts to decay all the way down to 3e-5, which at the time was 10% of the original learning rate. Now, one thing we are not following exactly is, let me see if I can find it again: their training horizon is 300 billion tokens, they come down to 10% of the initial learning rate at 260 billion, and then they train after 260 billion at 10%. So basically their decay time is less than the max steps time, whereas for us they're exactly equal. So it's not exactly faithful, but it's okay for our purposes right now, and we're just going to use this ourselves; I don't think it makes too big of a difference, honestly. I should point out that the learning rate schedule you use is totally up to you. There are many different types; the cosine learning rate has been popularized a lot by GPT-2 and GPT-3, but people have come up with all kinds of other schedules, and this is kind of an active area of research as to which one is most effective for training these networks. Okay, next up the paper talks about the gradual batch size increase: there's a linear ramp on the batch size, where you start with a very small batch size and ramp up to a big batch size over time. We're actually going to skip this and not work with it, and the reason I don't love to use it is that it complicates a lot of the arithmetic, because you're changing the number of tokens you're processing at every single step of the optimization, and I like to keep that math very, very simple.
Also, my understanding is that this is not a major improvement, and that it is not an algorithmic optimization improvement; it's more of a systems and speed improvement. Roughly speaking, that's because in the early stages of the optimization the model is in a very atypical setting, and mostly what you're learning is to ignore the tokens that don't come up in your training set very often; you're learning very simple biases and that kind of thing. Every single example that you put through your network is basically just telling you: use these tokens and don't use these tokens. So the gradients from every single example are extremely highly correlated; they all look roughly the same in the early part of the optimization, because they're all just telling you that these tokens don't appear and these tokens do appear. And because the gradients are all very similar and highly correlated, why would you use batch sizes of millions when, with a batch size of 32K, you're basically getting the exact same gradient early on in training? Then, later in the optimization, once you've learned all the simple stuff, that's where the actual work starts, the gradients become more decorrelated across examples, and that's where they actually offer you statistical power, in some sense. So we're going to skip this, just because it complicates things, and move on to: data are sampled without replacement during training, until an epoch boundary is reached. Without replacement means they're not sampling from some fixed pool, taking a sequence, training on it, but then also returning the sequence to the pool; they're exhausting the pool. When they draw a sequence, it's gone until the next epoch of training. We're already doing that, because our data loader iterates over chunks of data, so there's no replacement: chunks don't become eligible to be drawn again until the next epoch. So we're basically already doing that. Next: all models use a weight decay of 0.1 to provide a small amount of regularization. So let's implement weight decay, and you see here that I've already made the changes. In particular, instead of creating the optimizer right here, I'm creating a new configure_optimizers function inside the model and passing in some of the hyperparameters instead. So let's look at configure_optimizers, which is supposed to return the optimizer object. Okay, it looks complicated, but it's actually really simple; we're just being very careful, and there are a few settings to go through. The most important thing with respect to this line is that there's a weight_decay parameter here, and I'm passing that into something called optim_groups, which eventually ends up going into the AdamW optimizer. The weight decay that's used by default in AdamW is 0.01, so it's 10 times lower than what's used in the GPT-3 paper. So the weight decay basically makes its way into the AdamW optimizer via the optim groups. Now, what else is going on in this function?
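A minimal sketch of what a configure_optimizers along these lines might look like; the parameter split and the fused check follow the description here, but the exact names and structure are assumptions, not the walkthrough's literal code.

```python
import inspect
import torch

def configure_optimizers(model, weight_decay, learning_rate, device):
    params = [p for p in model.parameters() if p.requires_grad]
    # 2D+ tensors (matmul weights, embeddings) get weight decay;
    # 1D tensors (biases, layernorm scales) do not
    decay_params = [p for p in params if p.dim() >= 2]
    nodecay_params = [p for p in params if p.dim() < 2]
    optim_groups = [
        {"params": decay_params, "weight_decay": weight_decay},
        {"params": nodecay_params, "weight_decay": 0.0},
    ]
    # use the fused AdamW kernel only if this PyTorch build exposes it and we're on CUDA
    fused_available = "fused" in inspect.signature(torch.optim.AdamW).parameters
    use_fused = fused_available and "cuda" in device
    extra_args = dict(fused=True) if use_fused else dict()
    return torch.optim.AdamW(optim_groups, lr=learning_rate,
                             betas=(0.9, 0.95), eps=1e-8, **extra_args)
```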
So the two important things happening here are, first, that I'm splitting the parameters into those that should be weight decayed and those that should not. In particular, it is common to not weight decay biases and any other one-dimensional tensors; those go into the no-decay params, and they include things like layer norm scales and biases, where it doesn't really make sense to apply weight decay. You mostly want to weight decay the weights that participate in matrix multiplications, and you potentially want to weight decay the embeddings. We've covered in a previous video why it makes sense to decay the weights: you can think of it as regularization, because when you're pulling down all the weights, you're forcing the optimization to use more of the weights and not allowing any single weight to become way too large. You're forcing the network to distribute the work across more channels, because there's a kind of gravity pulling on the weights themselves. So that's why we separate them this way: we only decay the embeddings and the matmul-participating weights, and we print the number of parameters we're decaying and not decaying; most of the parameters will be decayed. The second thing I'm doing here is another optimization. Earlier versions of AdamW did not have this option, but later versions of PyTorch introduced it, which is why I'm guarding it with inspect.signature, basically checking whether this fused kwarg is present in AdamW; if it is present, I end up using it and passing it in, because some earlier versions do not have fused at all. So here is AdamW's fused argument: it did not used to exist and was added later, and there are some docs for what it does. Basically, they say that by default they do not use fused, because it is relatively new and they want to give it sufficient time to prove itself, but fused is a lot faster when it is available and when you're running on CUDA. What it does is this: instead of iterating in a for loop over all the parameter tensors and updating them, which would launch a lot of kernels, fused means all those kernels are fused into a single kernel. You get rid of a lot of overhead, and you call a kernel a single time over all the parameters to update them. So it's basically kernel fusion for the AdamW update instead of iterating over all the tensors. That's the configure_optimizers function that I like to use, and we can rerun. We're not going to see any major differences from before, but we are going to see some prints coming from here, so let's take a look at what they say. We see that the number of decayed tensors is 50, and that's most of the parameters, and the number of non-decayed tensors is 98; these are the biases and the layer norm parameters mostly, and there are only about 100,000 of those. So most of it is decayed. And then we are using the fused implementation of AdamW, which will be a lot faster; if you have it available, I'd advise you to use it. I'm not actually 100% sure why they don't default to it; it seems fairly benign and harmless. And also, because we are using the fused implementation, I think this is why we have dropped.
Notice that the running time used to be 93 milliseconds per step, and we're now down to 90 milliseconds per step because of the fused AdamW optimizer. So in a single commit here, we are introducing fused AdamW, getting an improvement in the time, and adding, or rather changing, the weight decay, but only weight decaying the two-dimensional parameters: the embeddings and the matrices that participate in the linears. So that is this, we can take this out, and that is it for this line. One more quick note before we continue. I just want to point out that the relationships between weight decay, learning rate, batch size, the Adam parameters beta1, beta2, and epsilon, and so on, are very complicated mathematical relationships in the optimization literature, and for the most part in this video I'm just trying to copy-paste the settings that OpenAI used. This is a complicated, quite deep topic, and in this video I just want to copy the parameters, because really doing it justice would be a whole different video rather than just high-level intuitions. Now, the next thing I want to move on to is this paragraph here, which, by the way, we're going to come back around to when we improve our data loader. For now I want to swing back to this table, where you'll notice that different models of course have different hyperparameters for the transformer that dictate its size, and they also have different learning rates. We've seen the pattern that the bigger networks are trained with slightly lower learning rates, and we also see that the small networks use a smaller batch size and the bigger networks use a bigger batch size. Now, the problem for us is that we can't just use a 0.5 million batch size, because if I try to come in here and set this B, where is my B? Okay, B equals 16. If I try to set it, well, we have to be careful: it's not 0.5 million rows, because this batch size is in number of tokens, and every single one of our rows is 1,024 tokens. So 0.5e6, half a million, divided by 1,024 would need a batch size of about 488. The problem is I can't come in here and set this to 488, because my GPU would explode; this would not fit for sure. But we still want to use this batch size, because, again, as I mentioned, the batch size is correlated with all the other optimization hyperparameters and the learning rates and so on. So we want a faithful representation of all the hyperparameters, and therefore we need to use a batch size of roughly 0.5 million tokens. The question is: how do we use 0.5 million if we only have a small GPU? For that we need what's called gradient accumulation. We're going to turn to that next; it allows us to simulate, in a serial way, any arbitrary batch size that we set. So we can do a batch size of 0.5 million, we just have to run longer: we process multiple sequences and basically add up all the gradients from them to simulate a batch size of 0.5 million. So let's turn to that next. Okay, so I started the implementation right here just by adding these lines of code, and basically what I did is first set the total batch size that we desire.
So this is almost exactly 0.5 million, and I used a nice number, a power of 2, because 2 to the 19 is 524,288; it's roughly 0.5 million and it's a nice number. Now our micro-batch size, as we call it now, is 16. So we still have B by T indices that go into the transformer and do a forward-backward, but we're not going to do an update on every one; we're going to do many forward-backwards, and those gradients are all going to plus-equals onto the parameter gradients, all adding up. So we're going to do forward-backward grad_accum_steps number of times, and then do a single update once all of that is accumulated. In particular, our micro-batch size now just controls how many rows, how many tokens, we're processing in a single forward-backward. So here we are doing 16 times 1,024, that is 16,384 tokens per forward-backward, and we are supposed to be doing 2 to the 19 in total, so the grad accum steps work out to 32: we have to do 32 forward-backwards and then a single update. Now, we see that a single forward-backward takes about 100 milliseconds, so doing 32 of them will make every step roughly three seconds, in napkin math. So those are the grad accum steps, but now we actually have to implement this, so we're going to swing over to our training loop, because this part here, the forward and the backward, we now have to repeat 32 times before we do everything else that follows. So let's see how we can implement that. Let's come over here; and actually, we do have to load a new batch every single time, so let me move that over here, and now this is where we have the inner loop. So for micro_step in range(grad_accum_steps), we do this. And remember that loss.backward() always deposits gradients: inside loss.backward() there's always a plus-equals on the gradients, so on every loss.backward() call, gradients add up on the gradient tensors. So we call loss.backward(), get all the gradients accumulated over there, and then we normalize, and everything else should just follow. We're very close, but there's actually a subtle and deep issue here, and this is actually incorrect. I invite you to think about why this is not yet sufficient, and then let me fix it. Okay, so I brought back the Jupyter notebook so we can think about this carefully in a simple toy setting and see what's happening. Let's create a very simple neural net that takes a vector of 16 numbers and returns a single number, and then here I'm creating some random examples x and some targets y, and we're using the mean squared error loss to calculate the loss. So basically what this is is four individual examples, and we're just doing simple regression with the mean squared loss over those four examples. Now, when we calculate the loss, call loss.backward(), and look at the gradient, this is the gradient that we get. Notice that in MSE loss the default for the loss function is reduction='mean', so we're calculating the average loss over the four examples. So this is the exact loss objective, and this is the average, the 1 over 4, because there are four independent examples here. And then we have the four examples and their squared error, and then this makes it the mean squared error.
So therefore we calculate the squared error and then normalize it to make it the mean over the examples, and there are four examples here. Now, when we come to the gradient accumulation version of it: this here is the gradient accumulation version, where we have grad accum steps of four, I reset the gradient, and now I'm evaluating all the examples individually instead, calling loss.backward() on them many times, and then we look at the gradient we get from that. So basically we now forward our function, calculate the same per-example loss, do a backward, and do that four times. And when we look at the gradient, you'll notice that the gradients don't match. Here we did a single batch of four, and here we did four gradient accumulation steps of batch size one, and the gradients are not the same. The reason they're not the same is exactly that this mean squared error loses the one-quarter: the loss objective for every one of the loop iterations is just the squared error of that single example, so that was the loss in the zeroth iteration, the same in the first, and so on. And then, when you do loss.backward(), we're accumulating gradients, and that accumulation in the gradient is basically equivalent to doing a sum in the loss. So our loss here is actually this, without the factor of one quarter outside of it: we're missing the normalizer, and therefore our gradients are off. One way to fix this is to come here and say loss equals loss divided by four. What happens now is that we're scaling our loss, introducing a one-quarter in front of all of these terms, so all the individual losses are now scaled by one quarter, and then when we do the backward, all of these accumulate with the sum, but now there's a one-quarter inside every one of these components, and our losses will be equivalent. So when I run this, you see that the gradients are now identical. Long story short, when you step through this simple example, you can see that the reason our training loop is not correct is that, in the same way as here with the MSE loss, the loss we're calculating in the model also uses a reduction of mean. Where is the loss? F.cross_entropy, and by default the reduction in cross-entropy (I don't know why they don't show it) is also the mean: the mean loss over all the B-by-T elements. So there's a reduction by mean in there, and if we just do this gradient accumulation here, we're missing that. The way to fix it is to simply compensate for the number of gradient accumulation steps and divide the loss in the same way: in particular, loss equals loss divided by grad_accum_steps. Even Copilot gets the modification.
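Here is the toy version of that argument as a self-contained sketch (the small regression net is illustrative, not the exact notebook code):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x, y = torch.randn(4, 16), torch.randn(4, 1)

# version 1: one batch of 4 examples, MSE with the default reduction='mean'
net.zero_grad()
F.mse_loss(net(x), y).backward()
grad_batched = net[0].weight.grad.clone()

# version 2: gradient accumulation, one example at a time; without the /4 the
# accumulated gradient corresponds to a SUM over examples, not a mean
net.zero_grad()
for i in range(4):
    loss = F.mse_loss(net(x[i:i+1]), y[i:i+1])
    (loss / 4).backward()            # re-introduce the missing 1/4 normalizer
assert torch.allclose(grad_batched, net[0].weight.grad, atol=1e-6)
```

And in the real script, the accumulation loop ends up looking roughly like this; model, train_loader, optimizer and device are the objects set up earlier in the walkthrough, so treat this as a shape sketch rather than the literal code:

```python
# sketch of the accumulation loop
total_batch_size = 524288                       # 2**19 tokens, ~0.5M
B, T = 16, 1024
grad_accum_steps = total_batch_size // (B * T)  # -> 32

optimizer.zero_grad(set_to_none=True)
for micro_step in range(grad_accum_steps):
    x, y = train_loader.next_batch()
    x, y = x.to(device), y.to(device)
    logits, loss = model(x, y)
    loss = loss / grad_accum_steps   # cross_entropy averages over B*T; compensate
    loss.backward()                  # deposits (+=) into the .grad tensors
optimizer.step()
```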
But in exactly the same way, we are scaling down the loss so that when we do loss.backward(), which basically corresponds to a sum in the objective, we are summing up the already-normalized losses. When we sum up the losses divided by grad_accum_steps, we recover the missing normalizer, and this becomes equivalent to the original optimization, because the gradient comes out the same. Okay, so I had to do a few more touch-ups and I launched the optimization here. In particular, because we want to print things nicely, we first need an accumulator over the loss: we can't just print the loss, because we'd be printing only the final loss at the final micro-step. So instead we have loss_accum, which I initialize at zero, and I accumulate the loss into it, using detach so that I'm detaching the tensor from the graph and just keeping track of the values; I'm making these leaf nodes when I add them. That's loss_accum, and we print that here instead of loss. In addition, I had to account for the grad accum steps in the tokens processed, because the tokens processed per step is now B times T times grad_accum_steps. So long story short, here we have the optimization; it looks reasonable, we're starting at a good spot, we calculated grad_accum_steps to be 32, and we're getting about three seconds per step, so this looks pretty good. Now, if you'd like to verify that your optimization and implementation are correct while you're working on your side: because we now have the total batch size and the gradient accumulation steps, our setting of B is purely a performance-optimization setting. If you have a big GPU, you can increase it to 32 and you'll probably go a bit faster; if you have a very small GPU, you can try 8 or 4. In any case, you should get the exact same optimization and the same answers, up to a floating point error, because the gradient accumulation kicks in and handles everything serially as necessary. So that's it for gradient accumulation, I think. Okay, so now is the time to bring out the heavy weapons. You've noticed that so far we've only been using a single GPU for training, but I am actually paying for eight GPUs here, so we should be putting all of them to work. In particular, they are going to collaborate, optimize over tokens at the same time, and communicate, so that they're all collaborating on the optimization. For this we are going to use DistributedDataParallel from PyTorch. There's also a legacy DataParallel, which I recommend you not use; that's kind of legacy. DistributedDataParallel works in a very simple way: we have eight GPUs, so we're going to launch eight processes, and each process is going to be assigned a GPU. For each process, the training loop and everything we've worked on so far is going to look pretty much the same; each GPU, as far as it's concerned, is just working on exactly what we've built so far.
But now, secretly, there are eight of them, and they're all going to be processing slightly different parts of the data. We're going to add one more part where, once they've all calculated their gradients, we average those gradients, and that's how they collaborate on the computational workload. So, to use all eight of them, we're not going to launch our script with just python train_gpt2.py anymore; we're going to run it with a special command called torchrun in PyTorch, which we'll see in a bit. When torchrun runs our Python script, it will actually run eight copies of it in parallel, and it creates environment variables so that each of these processes can look up which one of the processes it is. For example, torchrun will set the RANK, LOCAL_RANK, and WORLD_SIZE environment variables, and this is, somewhat awkwardly, how we detect whether DDP is running. If we're using torchrun, if DDP is running, then we have to make sure CUDA is available, because I don't know that you can run this on CPU anymore, or that it would make sense to. Then there's some setup code here. The important parts are: there's a world size, which for us will be eight; that's the total number of processes running. There's a rank: each process basically runs the exact same code at roughly the exact same time, and the only difference between the processes is that they all have a different DDP rank, so GPU 0 will have a DDP rank of 0, GPU 1 will have a rank of 1, etc. Otherwise they're all running the exact same script; it's just that the DDP rank is a slightly different integer, and that is how we coordinate so that they don't, for example, run on the same data; we want them to run on different parts of the data, and so on. Now, local rank is something that is only really relevant in a multi-node setting; we only have a single node with 8 GPUs, and local rank is the rank of the GPU on a single node, so from zero to seven, as an example. For us, running on a single box, the things we care about are rank and world size: world size is 8, and rank is whatever it is depending on the GPU this particular instantiation of the script runs on. Here we set the device according to the local rank, to "cuda:" plus the local rank, where the colon indicates which GPU to use when there is more than one GPU. So, depending on the local rank of the process, it will use the appropriate GPU, and there are no collisions over which GPU is being used by which process. Finally, there's a boolean variable I like to create, which is ddp_rank == 0. The master process is, arbitrarily, process number 0, and it does a lot of the printing, logging, checkpointing, etc. The other processes are thought of mostly as compute processes that are assisting, so master process 0 has some additional work to do, and all the other processes mostly just do forward-backwards. And if we're not using DDP and none of these variables are set, we revert back to single-GPU training: that means we only have rank zero, the world size is just one, we are the master process, and we try to auto-detect the device, and everything works as before. So far all we've done is initialize DDP, and when we run with torchrun, which we'll see in a bit, there are going to be eight copies running in parallel, each with a different rank.
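A sketch of the setup just described, using the environment variables that torchrun sets. This mirrors the structure described in the walkthrough, but treat the exact variable names as illustrative.

```python
import os
import torch
from torch.distributed import init_process_group

ddp = int(os.environ.get("RANK", -1)) != -1   # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE
if ddp:
    assert torch.cuda.is_available(), "this DDP setup assumes CUDA"
    init_process_group(backend="nccl")
    ddp_rank = int(os.environ["RANK"])
    ddp_local_rank = int(os.environ["LOCAL_RANK"])
    ddp_world_size = int(os.environ["WORLD_SIZE"])
    device = f"cuda:{ddp_local_rank}"          # each process pins its own GPU
    torch.cuda.set_device(device)
    master_process = ddp_rank == 0             # rank 0 does logging/checkpointing
else:
    ddp_rank, ddp_local_rank, ddp_world_size = 0, 0, 1
    master_process = True
    device = "cuda" if torch.cuda.is_available() else "cpu"
```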
And now we have to make sure that everything happens correctly afterwards. The tricky thing with running multiple processes is that you always have to imagine there are going to be eight processes running in parallel. So as you read the code now, you have to imagine there are eight Python interpreters running down these lines of code, and the only difference between them is that they have a different DDP rank. They all come here, they all pick the exact same seed, and they all make all of these calculations completely unaware of the other copies running, roughly speaking. So they all make the exact same calculations, and now we have to adjust these calculations to take into account that there's actually a certain world size and certain ranks. In particular, these micro-batch and sequence length settings are all per GPU, and now there are going to be num_processes of them running in parallel, so we have to adjust this: the gradient accumulation steps are now going to be the total batch size divided by B times T times ddp_world_size, because each process will do B times T and there are that many of them. In addition to that, we want to make sure this divides nicely into the total batch size, which for us it will, because 16 times 1024 times 8 GPUs is 131,072, and that divides 524,288 evenly. This means that our gradient accumulation steps will be 4 with the current settings: there are going to be 16 times 1024 tokens processed on each GPU, and there are 8 GPUs, so we're going to be doing 131,072 tokens in a single forward-backward across the eight GPUs. So we want to make sure that this fits nicely so that we can derive a nice grad_accum_steps. And yeah, let's just adjust the comment here: times ddp_world_size. Okay, so each GPU calculates this. Now this is where we start to run into issues: each process is going to come by this print, and they're all going to print, so we're going to have eight copies of these prints. One way to deal with this is exactly this master_process variable that we have: if master_process, then guard this. That's just so that we print this a single time, because otherwise all the processes would have computed the exact same variables and there's no need to print this eight times. Before getting into the data loader, which we're going to have to refactor, maybe at this point we should do some prints, take it out for a spin, and exit: so import sys, call sys.exit(), print "I am GPU" followed by the DDP rank, and print "bye". So now let's try to run this and see how it works. Let's take it for a spin just to see what it looks like. Normally we used to launch python train_gpt2.py like this; now we're going to run with torchrun, and this is what it looks like: torchrun --standalone, with the number of processes per node set to eight for us because we have eight GPUs, and then train_gpt2.py. So this is what the command looks like, and torchrun, again, will run eight of these. Let's just see what happens. First, it gets a little busy; there's a lot going on here. First of all, there are some warnings from distributed, and I don't actually know that these mean anything; I think this is just the code setting up and the processes coming online, and we're seeing some preliminary failures while the processes come up. I'm not 100% sure about that. But then we start to get into the actual prints, and the first print actually comes from process 5, just by chance.
And then it printed: process 5 basically got here first; it said "I am GPU 5", "bye". And then these prints come from the master process. So process 5 just finished first for whatever reason; it just depends on how the operating system scheduled the processes to run. Then GPU 0 ended, then GPU 3 and 2, and then probably process 5 or something like that exited, and DDP really doesn't like that, because we didn't properly dispose of the multi-GPU setting; it complains that the process group has not been destroyed before we destruct. In an actual application, we would want to call destroy_process_group so that we clean up DDP properly. And then the last of the GPUs finish, and that's it. So basically we can't guarantee the order in which these processes run; it's totally arbitrary, but they are running in parallel. Next up, let's erase this. Next, we want to make sure that when we create DataLoaderLite, we make it aware of this multi-process setting, because we don't want all the processes to be loading the exact same data; we want every process to get its own chunk of data, so that they're all working on different parts of the dataset, of course. So let's adjust that. One particularly simple and naive way to do this is to make sure that we pass the process rank and the number of processes into the data loader. Then, when we come up here, we see that we now take rank and num_processes and save them. Now the current position will not start at zero, because what we want is to stride out all the processes: one way to do this is to take self.B times self.T and multiply it by the process rank. So process rank 0 will start at 0, but process rank 1 now starts at B times T, process rank 2 starts at 2 times B times T, etc. That is the initialization. They still produce batches identically, but now when we advance, we don't advance by B times T; we advance by B times T times the number of processes. So basically, the total number of tokens that we're consuming is B times T times num_processes, each chunk going to a different rank, and the position has to advance by that entire chunk. And then here, if advancing by B times T times self.num_processes plus one would exceed the number of tokens, we're going to loop, and when we loop, we of course want to loop in the exact same strided way, so we reset back. This is the simplest change I could find for a very simple distributed DataLoaderLite, and you can notice that if process_rank is zero and num_processes is one, the whole thing is identical to what we had before; but now we can actually have multiple processes running, and this should work fine. So that's the data loader. Okay, so next up, once they've all initialized the data loader, they come here and they all create a GPT model. So we create eight GPT models on eight processes, but because the seeds are fixed here, they all create the same identical model. They all move it to the device of their rank, and they all compile the model. Because the models are identical, there are eight identical compilations happening in parallel, but that's okay. Now, none of this changes, because that is on a per-step basis, and we're currently working within a step, because all the changes we're making are within-step changes.
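Backing up for a moment, here is a rough sketch of the batch-size bookkeeping and the strided data loader just described. It continues the sketch above (it assumes ddp_world_size and ddp_rank from there), and the class and field names are assumptions; the dummy token buffer is a placeholder for the real tokenized data file.

total_batch_size = 524288  # 2**19 tokens per optimizer step
B, T = 16, 1024            # micro-batch size and sequence length, per GPU
assert total_batch_size % (B * T * ddp_world_size) == 0
grad_accum_steps = total_batch_size // (B * T * ddp_world_size)  # 4 with 8 GPUs

class DataLoaderLite:
    def __init__(self, B, T, process_rank, num_processes):
        self.B, self.T = B, T
        self.process_rank = process_rank
        self.num_processes = num_processes
        # placeholder: the real loader reads the tokenized dataset from disk
        self.tokens = torch.randint(0, 50257, (1_000_000,))
        # each process starts at its own offset, so the chunks don't overlap
        self.current_position = self.B * self.T * self.process_rank

    def next_batch(self):
        B, T = self.B, self.T
        buf = self.tokens[self.current_position : self.current_position + B * T + 1]
        x = buf[:-1].view(B, T)  # inputs
        y = buf[1:].view(B, T)   # targets
        # advance past what *all* processes consumed in this micro-step
        self.current_position += B * T * self.num_processes
        if self.current_position + (B * T * self.num_processes + 1) > len(self.tokens):
            self.current_position = self.B * self.T * self.process_rank  # loop around
        return x, y

train_loader = DataLoaderLite(B, T, process_rank=ddp_rank, num_processes=ddp_world_size)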
Now, the important thing here is that when we construct the model, we actually have a bit of work to do: we need to wrap the model into the distributed data parallel container. So this is how we wrap the model into the DDP container, and these are the docs for DDP; they're quite extensive, and there are a lot of caveats and a lot of things to be careful with, because everything gets ten times more complex when multiple processes are involved. But roughly speaking, this device_ids argument I believe has to be passed in. Unfortunately, the docs for what device_ids is are extremely unclear; when you actually come here, the comment for it is roughly nonsensical, but I'm pretty sure it's supposed to be the DDP local rank, so not the DDP rank, the local rank. So that's what you pass in here, and this wraps the model. In particular, what DDP does for you is that in the forward pass it behaves identically; my understanding is that nothing should change in the forward pass. But in the backward pass, in the simplest setting, once the backward pass is over on each independent GPU, each GPU has the gradients for all the parameters, and what DDP does for you is, once the backward pass is over, it calls what's called all_reduce, which basically computes an average of the gradients across all the ranks and then deposits that average on every single rank. So every single rank ends up with the average. That's the communication: it just synchronizes and averages the gradients, and that's what DDP offers you. Now, DDP is actually a little more involved than that, because as you are doing the backward pass through the layers of the transformer, it can actually dispatch the communication for gradients while the backward pass is still happening. So there's overlap between the communication and synchronization of the gradients and the backward pass, and it's just more efficient to do it that way. So that's what DDP does for you: forward is unchanged, backward is mostly unchanged, and we're tacking on this averaging, as we'll see in a bit. Okay, so now let's go to the optimization. Nothing changes up here; let's go to the inner loop and think through the synchronization of these gradients in DDP. Basically, by default, what happens, as I mentioned, is that when you do loss.backward() here, it will do the backward pass and then synchronize the gradients. The problem is that because of the gradient accumulation steps loop here, we don't actually want to do the synchronization after every single loss.backward(), because we are just depositing gradients, doing that serially, and we just want them adding up; we don't want to synchronize every single time, which would be extremely wasteful. So basically we want to add them up, and then only on the very last step, when micro_step becomes grad_accum_steps minus one, do we actually want to do the all_reduce to average up the gradients. To do that, we come here, and the officially sanctioned way, by the way, is to use this no_sync context manager. PyTorch says this is a context manager to disable gradient synchronization across DDP processes: within this context, gradients will be accumulated, and basically, when you use no_sync, there will be no communication.
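A minimal sketch of the wrapping step, continuing the sketch above. The raw_model handle is the one he mentions a bit later for calling configure_optimizers; treat the exact names, and the GPT/GPTConfig classes assumed from earlier in the video, as assumptions.

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

model = GPT(GPTConfig(vocab_size=50304))  # assuming the GPT class from earlier in the video
model.to(device)
model = torch.compile(model)
if ddp:
    # forward is unchanged; backward additionally all-reduces (averages) the gradients
    model = DDP(model, device_ids=[ddp_local_rank])
raw_model = model.module if ddp else model  # the unwrapped nn.Module, e.g. for configure_optimizers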
So they are telling us to do "with ddp.no_sync():", do the gradient accumulation and accumulate the grads inside it, and then do one more forward and .backward() with another input outside of it. And I just really don't love this; the fact that you have to copy-paste your code and use a context manager is just super ugly. So when I went to the source code, you can see that when you enter the context manager, it simply toggles this variable, require_backward_grad_sync, and this is the variable that, if you step through it, is toggled to determine whether the gradients are going to be synchronized. So I actually just like to use that directly. Instead, what I like to do is the following: right here, before the loss.backward(), if we are using DDP, then we only want this variable to be true when it is the final micro-step; in all the other iterations inside the micro-step loop, we want it to be false. So I just toggle it like this: require_backward_grad_sync should only turn on when the micro-step is the last step. I'm toggling this variable directly, and I hope that it impacts loss.backward(). This is a naughty thing to do, because they could change DDP and this variable could go away, but for now I believe it works, and it allows me to avoid the use of context managers and code duplication. I'm just toggling the variable, and then loss.backward() will not synchronize on most of the steps, and it will synchronize on the very last step. And so once this is over and we come out, every single rank will suddenly, magically, have the average of all the gradients that were stored on all the ranks. So now we have to think through whether that is what we want, whether this suffices, and how it works with the loss and what loss_accum is. So let's think through that now. The problem I'm getting at is that we've averaged the gradients, which is great, but loss_accum has not been impacted yet, and it's outside of the DDP container, so it is not being averaged. And so here, when we are printing loss_accum, presumably we're only going to be printing on the master process, rank zero, and it's just going to print the losses that it saw on its own process. But instead we want it to print the loss averaged over all the processes, because we did the average of the gradients, so we want the average of the loss as well. So simply, here after this, this is the code that I've used in the past: if ddp, then, and this is torch.distributed -- where do I import it? Oh gosh, this file is starting to get out of control, huh? -- so import torch.distributed as dist, and then dist.all_reduce, and we're doing the average on loss_accum. This loss_accum tensor exists on all the ranks; when we call all_reduce with the average op, it creates the average of those numbers and deposits that average on all the ranks, so after this call, all the ranks will contain loss_accum averaged up. And so when we print here on the master process, loss_accum is identical on all the other ranks as well. So here, if master_process, we want to print like this. And finally, we have to be careful because we're now processing even more tokens, so it's times ddp_world_size for the number of tokens that we've processed up above, and everything else should be fine.
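Putting the last two pieces together, the inside of one training step might look roughly like this sketch, under the same assumptions as above (and note that poking require_backward_grad_sync is, as he says, relying on an internal detail of DDP; autocast and gradient clipping are omitted here).

loss_accum = 0.0
optimizer.zero_grad()  # assumes an optimizer was created as earlier in the video
for micro_step in range(grad_accum_steps):
    x, y = train_loader.next_batch()
    x, y = x.to(device), y.to(device)
    logits, loss = model(x, y)           # assuming the model returns (logits, loss)
    loss = loss / grad_accum_steps       # scale so the accumulated gradient is an average
    loss_accum += loss.detach()
    if ddp:
        # only sync (all-reduce) the gradients on the very last micro-step
        model.require_backward_grad_sync = (micro_step == grad_accum_steps - 1)
    loss.backward()
if ddp:
    dist.all_reduce(loss_accum, op=dist.ReduceOp.AVG)  # average the reported loss across ranks
optimizer.step()
if master_process:
    print(f"loss: {loss_accum.item():.6f}")
# ... and at the very end of the script, to keep NCCL happy:
if ddp:
    dist.destroy_process_group()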
The only other thing to be careful with is, as I mentioned, that you want to destroy the process group, so that we are nice to NCCL and DDP is not going to complain to us when we exit here. So that should be it; let's take it for a spin. Okay, so I launched the script and it should be printing here imminently. We're now training with 8 GPUs at the same time, so the gradient accumulation steps is not 32; it's now divided by 8 and is just 4. Otherwise, this is what the optimization now looks like, and wow, we're going really fast: we're processing 1.5 million tokens per second now. These are some serious numbers, and the tiny Shakespeare dataset is so tiny that we're most likely doing many, many epochs over it, but this is roughly what it looks like. One thing I had to fix, by the way, is that this was model.configure_optimizers, which no longer works, because model is now a DDP model. So instead this has to become raw_model.configure_optimizers, where raw_model is something I create here: right after I wrap the model into DDP, I create raw_model, which in the case of DDP is model.module -- that's where DDP stores the raw nn.Module of GPT-2 as we have it, which contains the configure_optimizers function that we want to call. So that's one thing I had to fix; otherwise, this seems to run. Now, one thing you'll notice is that when you compare this run and its numbers to a single-GPU run with 32 gradient accumulation steps, the numbers won't exactly match up, and there's kind of a boring reason why that happens. The reason is that in the data loader, we're iterating through batches in a slightly different way, because now we're striding through the data in a bigger chunk per step, and if that chunk, across all the GPUs, exceeds the number of tokens, we just loop. So the single-GPU run and the multi-GPU run end up resetting in a slightly different manner, our batches are slightly different, and we get slightly different numbers. One way to convince yourself that this is okay is to just make the total batch size much smaller, along with B and T: I think I used 4 times 1024 times 8, so 32,768 as the total batch size, and made sure that the single GPU does eight gradient accumulation steps while the multi-GPU run does one; that reduces the boundary effects of the data loader, and you'll see that the numbers match up. So long story short, we're now going really, really fast, the optimization is mostly consistent with the GPT-2 and GPT-3 hyperparameters, and we have outgrown our tiny Shakespeare file, so we want to upgrade it. Let's move to that next. So let's now take a look at what datasets were used by GPT-2 and GPT-3. GPT-2 used this WebText dataset that was never released; there's an attempt at reproducing it called OpenWebText. Basically, roughly speaking, what they say in the paper is that they scraped all outbound links from Reddit with at least three karma, and that was kind of their starting point; they collected all the web pages and all the text in them. This was 45 million links, and it ended up being 40 gigabytes of text. So that's roughly what GPT-2 says about its dataset: it's basically outbound links from Reddit.
Now, when we go over to GPT-3, there's a training dataset section, and that's where they start to talk about Common Crawl, which is used a lot more; actually, I think even GPT-2 talked about Common Crawl. But basically, it's not a very high-quality dataset all by itself, because it's extremely noisy. It's a completely random subset of the internet, and it's much worse than you think. So people go to great lengths to filter Common Crawl, because there is good stuff in it, but most of it is just ad spam, random tables and numbers, stock tickers, and it's a total mess. That's why people like to train on data mixtures that they curate and are careful with. A large chunk of these data mixtures will typically be Common Crawl -- for example, 50% of the tokens -- but then here in GPT-3 they're also using WebText2 from before, so that's the Reddit outbound links, and they're also adding, for example, books, and they're adding Wikipedia; there are many other things you could decide to add. Now, this dataset for GPT-3 was also never released. Some of the datasets that I'm familiar with that are quite good and would be representative of something along these lines are, number one, the RedPajama dataset, or more specifically, for example, the SlimPajama subset of RedPajama, which is a cleaned and deduplicated version of it. Just to give you a sense, again, it's a bunch of Common Crawl, C4 -- which is also, as far as I know, Common Crawl, but processed differently -- and then we have GitHub, Books, ArXiv, Wikipedia, Stack Exchange. These are the kinds of datasets that go into these data mixtures. Now, the one that I specifically like, which came out recently, is called the FineWeb dataset. This is an attempt to collect really high-quality Common Crawl data and filter it, in this case, to 15 trillion tokens. And then, in addition to that, more recently Hugging Face released this FineWeb-Edu subset, which is 1.3 trillion tokens of educational and 5.4 trillion tokens of high-educational content. So basically they're trying to filter Common Crawl down to very high-quality educational subsets, and this is the one we will use. There's a long web page here on FineWeb, and they go into a ton of detail about how they process the data, which is really fascinating reading, by the way; if you're interested in data mixtures and how data gets processed at these scales, I'd definitely recommend looking at this page. More specifically, we'll be working with FineWeb-Edu, which is basically educational content from the internet; they show that training on educational content, in their metrics, works really, really well. And we're going to use the sample-10BT subset, a 10-billion-token subsample of FineWeb-Edu, because we're not going to be training on trillions of tokens; we're just going to train on a 10-billion-token sample, because empirically, in my previous few experiments, this actually suffices to get really close to GPT-2 performance, and it's simple enough to work with. So our goal will be to download it, process it, and make sure that our data loader can work with it. Let's get to that. Okay, so I introduced another file here that will basically download FineWeb-Edu from Hugging Face datasets, pre-process and pre-tokenize all of the data, and save the data shards to a folder on local disk.
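As a rough sketch of what that pre-tokenization does, here is the core of it, assuming tiktoken's GPT-2 encoder; the Hugging Face download, the multiprocessing, and the exact shard-writing code are omitted, and the document field name "text" is an assumption.

import numpy as np
import tiktoken

enc = tiktoken.get_encoding("gpt2")
eot = enc._special_tokens['<|endoftext|>']  # 50256, used as the document delimiter

def tokenize(doc):
    # every document starts with the end-of-text token, then its own tokens
    tokens = [eot]
    tokens.extend(enc.encode_ordinary(doc["text"]))
    tokens_np = np.array(tokens)
    assert (0 <= tokens_np).all() and (tokens_np < 2**16).all(), "token ids must fit in uint16"
    return tokens_np.astype(np.uint16)  # uint16 to save disk space

# tokens from many documents are concatenated and written out in shards of
# exactly 100 million tokens each, e.g. with np.save(shard_filename, shard)
example = tokenize({"text": "Hello world"})
print(example.dtype, example[:5])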
And so while this is running, I just wanted to briefly mention that you can look through the dataset viewer here, just to get a sense of what's in it, and it's kind of interesting; it looks like it's working fairly well. It's talking about nuclear energy in France and a bunch of other topics, so it actually seems like their filters are working pretty well. The filters here, by the way, were applied automatically using Llama 3 70B, I believe. So basically, LLMs are judging which content is educational, and that's what makes it through the filter, which is pretty cool. Now, in terms of the script itself, I'm not going to go through the full script, because it's not as interesting and not as LLM-centric, but when you run this: number one, we load the dataset -- this is all Hugging Face code, and to run it you're going to need to pip install datasets -- so it downloads the dataset, and then it tokenizes all of the documents inside it. Now, when we tokenize a single document, you'll notice that we first start the tokens with the end-of-text token. This is a special token in the GPT-2 tokenizer, as you know; 50256 is the ID of end-of-text, and this is what begins a document, even though it's called end of text. But this is the first token that begins a document. Then we extend with all of the tokens of that document, then we create a numpy array out of that, and we make sure that all the tokens are in range -- oh okay, let me debug this. Okay, so apologies for that; it just had to do with me using a float division in Python. It must be integer division, so that this is an int and everything is nice. Okay, but basically the tokenization here is relatively straightforward: it returns tokens as np.uint16. We're using uint16 to save a little bit of space, because 2 to the 16 minus 1 is 65,535, and the GPT-2 max token ID is well below that. And then here there's a bunch of multiprocessing code, and it's honestly not that exciting, so I'm not going to step through it, but we're loading the dataset, tokenizing it, and saving everything to shards. The shards are numpy files, so we're just storing a numpy array, which is very, very similar to torch tensors. The first shard, index 0, is a validation shard, and all the other shards are training shards; as I mentioned, they each have exactly 100 million tokens in them. That just makes the files easier to work with, because a single massive file can sometimes be hard to work with on disk, so sharding is just nicer from that perspective. And yeah, we'll just let this run; it will probably take 30-ish minutes or so, and then we're going to come back and actually train on this data, and we're going to be doing some legit pre-training in this case: this is a good dataset, we're doing lots of tokens per second, we have eight GPUs, the code is ready, so we're actually going to be doing a serious training run. So let's come back in a bit. Okay, so we're back. If we ls the edu_fineweb folder, we see that there are now 100 shards in it, and that makes sense because each shard is 100 million tokens, so 100 shards is 10 billion tokens in total. Now, swinging over to the main file, I made some adjustments to our data loader again.
And that's because we're not running with Shakespeare anymore; we want to use the FineWeb shards. So you'll see some code here that can additionally load these shards: we load the uint16 numpy file, we convert it to a torch.long tensor, which is what a lot of the layers up top expect by default, and then here we're just enumerating all the shards. I also added a split argument to DataLoaderLite, so we can load the train split but also the val split, the zeroth shard. And then here we have not just the current position now, but also the current shard: we have a position inside a shard, and when we run out of tokens in a single shard, we first advance the shard (and loop if we need to), then we get the tokens and readjust the position. So this data loader will now iterate over all the shards as well; I changed that. The other thing I did while the data was processing is that our train loader now has split 'train', of course, and down here I set up some numbers: we are doing 2 to the 19 tokens per step, and we want to do roughly 10 billion tokens, because that's how many unique tokens we have. If we take 10 billion tokens and divide by 2 to the 19, we see that this is 19,073 steps, so that's where that comes from. And then the GPT-3 paper says that they warm up the learning rate over 375 million tokens, so I came here, and 375e6 tokens divided by 2 to the 19 is 715 steps; that's why warmup_steps is set to 715. So this will exactly match the warmup schedule that GPT-3 used. I think 715, by the way, is very mild, and this could be made significantly more aggressive; probably even something like 100 is good enough. But it's okay, let's leave it for now, so that we have the exact hyperparameters of GPT-3. So I fixed that, and then that's pretty much it; we can run. So we have our script here and we can launch. Actually, sorry, let me do one more thing. For my GPU, I can actually fit a bigger batch size, and I believe I can fit 64 on my GPU as the micro-batch size, so let me try that. I could be misremembering, but that means 64 times 1024 per GPU, and then we have 8 GPUs, so we would not even be doing gradient accumulation if this fits, because it just multiplies out to the full total batch size. So no gradient accumulation, and that would run pretty quickly if it fits. Let's go, let's go. I mean, if this works, then this is basically a serious pre-training run. We're not logging, we're not evaluating the validation split, we're not running any evaluations yet, so we haven't crossed our t's and dotted our i's, but if we let this run for a while, we're actually going to get a pretty good model, a model that might even be on par with or better than GPT-2 124M. Okay, so it looks like everything is going great: we're processing 1.5 million tokens per second, and everything here looks good. We're doing 330 milliseconds per iteration, and we have to do a total of -- where are we printing that? -- 19,073 steps. So 19,073 times 0.33 is this many seconds, this many minutes, so this will run for about 1.7 hours. So roughly a one-and-a-half-hour run like this, and we don't even have to use gradient accumulation, which is nice. You might not have that luxury on your GPU; in that case, just start decreasing the batch size until things fit, but keep it to nice numbers.
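Just to spell out the arithmetic being done here, a small sanity-check snippet (nothing more than the numbers from the narration):

tokens_per_step = 2**19                          # 524,288 tokens per optimizer step
max_steps = 10_000_000_000 // tokens_per_step    # 19,073 steps ~= one epoch over 10B tokens
warmup_steps = 375_000_000 // tokens_per_step    # 715 steps, matching GPT-3's 375M-token warmup
B, T, num_gpus = 64, 1024, 8
assert B * T * num_gpus == tokens_per_step       # so grad_accum_steps == 1 on this box
print(max_steps, warmup_steps, max_steps * 0.33 / 3600, "hours at 330 ms/step")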
So that's pretty exciting. We're currently warming up the learning rate, so you see that it's still very low, 1e-4; this will ramp up over the next few steps all the way to 6e-4 here. Very cool. So now what I'd like to do is cross our t's and dot our i's: let's evaluate on the validation split, and let's try to figure out how we can run evals, how we can do logging, and how we can visualize our losses, and all the good stuff. So let's get to that before we actually do the run. Okay, so I've adjusted the code so that we're evaluating on the validation split: creating the val_loader just by passing in split='val' will basically create a data loader just for the validation shard. The other thing I did is that in the data loader I introduced a new function, reset, which is called in init, and it basically resets the data loader. That is very useful because, when we come to the main training loop now -- this is the code I've added -- every hundredth iteration, including the zeroth iteration, we put the model into evaluation mode, we reset the val loader, and then, with no gradients involved, we accumulate the loss over, say, 20 steps, average it all up, and print out the validation loss. So that is basically the exact same logic as the training loop, roughly, but there's no loss.backward(); it's only inference. We're just measuring the loss and adding it up; everything else otherwise applies and is exactly as we've seen it before. So this will print the validation loss every hundredth iteration, including on the very first iteration. That's nice; it will tell us a little bit about how much we're overfitting. That said, we have roughly infinite data, so we're mostly expecting our train and val loss to be about the same. But the other reason I'm interested in this is that we can take the GPT-2 124M as OpenAI released it, initialize from it, and see what kind of loss it achieves on this validation set as well, and that gives us an indication of how well that model does on the FineWeb-Edu validation split. That said, it's not a super fair comparison to GPT-2, because it was trained on a very different data distribution, but it's still kind of an interesting data point. And in any case, you would always want to have a validation split in a training run like this, so that you can make sure you are not overfitting. This is especially a concern if we were to do more epochs over our training data; for example, right now we're just doing a single epoch, but if we got to a point where we wanted to train on 10 epochs or something like that, we would want to be really careful that we're not memorizing the data too much, if we have a big enough model, and the validation split would be one way to tell whether that is happening. Okay, and in addition to that, if you remember, at the bottom of our script we had all of this orphaned sampling code from way back when. I deleted that code and moved it up to here. So once in a while we evaluate the validation loss, once in a while we generate samples, and we only do that every 100 steps, while we train on every single step. So that's the structure right now, and I've been running this for 1,000 iterations. Here are some samples at iteration 1,000: "Hello, I'm a language model and I'm not able to get more creative."
"I'm a language model and languages file you're learning about here is or is the beginning of a computer." Okay, so this is all still pretty garbled, but we're only at iteration 1,000, and we've only just barely reached the maximum learning rate, so this is still learning. We're about to get some more samples coming up at 1,100. Okay, this is, you know, the model is still a young baby. Okay, so basically all of this sampling code that I've put here should be familiar to you; it came from before. The only thing I did is create a generator object in PyTorch, so that I have direct control over the sampling of the random numbers, because I don't want to impact the RNG state of the global random number generator that is used for training; I want this to be completely outside of the training loop. So I'm using a special sampling RNG, I make sure to seed it so that every single rank has a different seed, and then I pass it in here, where we consume random numbers in multinomial, where the sampling happens -- I make sure to pass in the generator object there. Otherwise this is identical. Now, the other thing is, you'll notice that we're running a bit slower. That's because I actually had to disable torch.compile to get this to sample. For some reason it works without torch.compile, but when I compile my model, I get a really scary error from PyTorch, and I have no idea how to resolve it right now. So probably by the time you see this code released, maybe it's fixed. But for now, I'm just going to guard this with False and bring back torch.compile, so you're not going to get samples; I think I'll fix this later. By the way, I will be releasing all this code, and I've actually been very careful about making git commits every time we add something, so I'm going to release the entire repo that starts completely from scratch all the way to now and beyond, and everything should be exactly documented in the git commit history, which I think will be nice. So hopefully by the time you go to GitHub, this is fixed and working, and I will have fixed the bug. Okay, so I have the optimization running here, and it's stepping; we're on step 6,000 or so, so we're about 30% through training. Now, while this is training, I would like to introduce one evaluation that we're going to use to supplement the validation set, and that is the HellaSwag eval. HellaSwag comes from this paper back in 2019, so it's a five-year-old eval now, and the way HellaSwag works is that it is basically a sentence completion dataset; it's multiple choice. For every one of these questions, we have a shared context, like: a woman is outside with a bucket and a dog; the dog is running around trying to avoid a bath; she, A, rinses the bucket off with soap and blow-dries the dog's head; B, uses a hose to keep it from getting soapy; C, gets the dog wet and it runs away again; or D, gets into a bathtub with the dog. And so basically the idea is that these multiple choices are constructed so that one of them is a natural continuation of the sentence and the others are not, and the others might not make sense -- like "uses a hose to keep it from getting soapy", that makes no sense. And so what happens is that models that are not trained very well are not able to tell these apart, but models that have a lot of world knowledge, and can tell a lot about the world, will be able to tell which of these completions is the right one.
These sentences are sourced from ActivityNet and from WikiHow, and at the bottom of the paper there's kind of a cool chart of the kinds of domains in WikiHow: there are a lot of sentences from computers and electronics, homes and garden, and it has broad coverage of the kinds of things you need to know about the world in order to find the most likely completion. One more thing that's kind of interesting about HellaSwag is that it was constructed so that the incorrect options are deliberately adversarially sourced: they're not just random sentences, they're actually sentences generated by language models, and they're generated in such a way that language models find them difficult but humans find them easy. They mention that humans have 95% accuracy on this set, but at the time, state-of-the-art language models had only 48%, and so at the time this was a good benchmark. You can read the details of the paper to learn more. The thing to point out, though, is that this was five years ago, and since then HellaSwag has been essentially solved: language models are now at about 96%, so basically the last 4% is probably errors in the dataset, or questions that are really, really hard. So this dataset is kind of crushed with respect to today's language models; back then the best language model was only at about 50%, and that's how far things have come. But still, the reason people like HellaSwag -- and it's not used, by the way, in GPT-2, but in GPT-3 there is a HellaSwag eval, and lots of people use it -- is that for GPT-3 we have results here that are cited, so we know what accuracies GPT-3 attains at all these different model checkpoints for the HellaSwag eval. And the reason people like it is that HellaSwag is a smooth eval, and it is an eval that offers, quote-unquote, early signal. Early signal means that even small language models are going to start at the random chance of 25%, but they're going to slowly improve, and you're going to see 25, 26, 27, and so on; you can see slow improvement even when the models are very small and it's very early. So it's smooth, it has early signal, and it's been around for a long time; that's why people kind of like this eval. Now, the way that we're going to evaluate it is as follows. As I mentioned, we have a shared context, and this is kind of a multiple-choice task, but instead of giving the model a multiple-choice question and asking it for A, B, C, or D, we can't do that, because these models, when they are as small as the ones we're seeing here, can't actually do multiple choice; they don't understand the concept of associating a label with one of the options. So we have to give it to them in a native form.
And the native form is token completion. So here's what we do: we construct a batch of four rows and T tokens, whatever that T happens to be. The shared context, which is basically the context for the four choices, has its tokens shared across all of the rows, and then we lay out the four options; only one of the options is correct, in this case label 3, option 3, so that is the correct option, and options 1, 2, and 4 are incorrect. Now, these options might be of different lengths, so what we do is take the longest length, and that's the size of the batch, B by T, and some of the positions here are going to be padding; they're going to be unused. So we need the tokens, we need the correct label, and we need a mask that tells us which tokens are active, and the mask is zero in those padded areas. That's how we construct these batches. And then, in order to get the language model to predict A, B, C, or D, the way this works is that we're just going to look at the token probabilities and pick the option that gets the highest average probability for its tokens -- equivalently, the lowest average loss -- because that is the most likely completion according to the language model. So we're just going to look at the probabilities here, average them up across each option, and pick the one with the highest average probability, roughly speaking. This is how we're going to do HellaSwag, and I believe this is also how GPT-3 did it, as far as I know. But you should note that some of the other evals where you might see HellaSwag may not do it this way; they may do it in a multiple-choice format, where you give the context a single time and then the four completions, so the model is able to see all four options before it picks the best one. That's actually an easier task for the model, because you get to see the other options when you're making your choice. Unfortunately, models at our size can't do that; only models at a bigger size are able to do that. So our models are actually slightly handicapped in this way: they are not going to see the other options, they only see one option at a time, they just have to assign probabilities, and the correct option has to win out in this metric. All right, so let's now implement this very briefly and incorporate it into our script. Okay, so what I've done here is introduce a new file called hellaswag.py that you can take a look at, and I'm not going to step through all of it, because it's not exactly deep code; it's honestly a little bit tedious. What's happening is that I'm downloading HellaSwag from GitHub and rendering all of its examples -- there are a total of 10,000 examples -- into this format. And so here, at the end of this render_example function, you can see that I'm returning the tokens -- this 4 by T array of tokens -- the mask, which tells us which parts are the options (everything else is zero), and the label, which is the correct label.
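Given those three things per example (a 4-by-T token tensor, a mask marking the completion region, and the correct label), the scoring he describes might look roughly like the sketch below. The function name and details are assumptions; his actual helper differs in specifics, but the idea is the same: per-token cross entropy, masked to the completion tokens, averaged per row, lowest average loss wins.

import torch
import torch.nn.functional as F

def most_likely_row(tokens, mask, logits):
    # autoregressive shift: logits at position i predict token i+1
    shift_logits = logits[:, :-1, :].contiguous()
    shift_tokens = tokens[:, 1:].contiguous()
    losses = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_tokens.view(-1),
        reduction='none',
    ).view(tokens.size(0), -1)          # per-token loss, shape (4, T-1)
    shift_mask = mask[:, 1:]            # only score the completion (option) tokens
    masked_losses = losses * shift_mask
    avg_loss = masked_losses.sum(dim=1) / shift_mask.sum(dim=1)  # average loss per option
    return avg_loss.argmin().item()     # lowest average loss == most likely completion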
And so that allows us to iterate the examples and render them, and I have an evaluate function here which can load a GPT-2 from Hugging Face and run the eval. It basically just calculates, exactly as I described, the option with the highest probability: the way to do that is to evaluate the cross-entropy loss of predicting the next token in the sequence, and then look at the row that has the lowest average loss; that's the option we pick as the prediction. Then we do some stats and prints and stuff like that. So that is a way to evaluate HellaSwag. Now, if you go up here, I'm showing that for GPT-2 124M, if you run this script, you're going to see that HellaSwag gets 29.55%. So that's the performance we get here. Remember that random chance is 25%, so we haven't gone very far, and GPT-2 XL, which is the biggest GPT-2, gets all the way up to roughly 49%. These are pretty low values considering that today's state of the art is more like 95%, so these are definitely older models by now. And then there's one more thing called the Eleuther evaluation harness, which is a very common piece of infrastructure for running evals for language models, and they get slightly different numbers. I'm not 100% sure what the discrepancy is; it could be that they actually do the multiple-choice format instead of just the completions, and that could be the discrepancy, but I'm not 100% sure about that -- I'd have to take a look. But for now, our script reports 29.55%, and so that is the number we'd like to beat if we were training a GPT-2 124M from scratch ourselves. So now I'm going to actually incorporate this eval into our main training script, basically because we want to evaluate it in a periodic manner, so that we can track HellaSwag and how it evolves over time, and see when and if we cross this 29.55% region. So let's now walk through some of the changes to train_gpt2.py. The first thing I did here is make use_compile optional, kind of, and I disabled it by default. The problem with compile is that, unfortunately, while it does make our code faster, it actually breaks the evaluation code and the sampling code; it gives me a very gnarly message and I don't know why. So hopefully by the time you get to the code base, when I put it up on GitHub, we will have fixed that, but for now I'm running without torch.compile, which is why you see this be a bit slower. I also created a log directory, log, where we can place our log.txt, which will record the train loss, validation loss, and the HellaSwag accuracies -- a very simple text file. We open it for writing so that it starts empty, and then we append to it. I created a simple variable that helps tell us when we have the last step, and then basically, periodically inside this loop, every 250th iteration or at the last step, we're going to evaluate the validation loss, and then every 250th iteration we are going to evaluate HellaSwag, but only if we are not using compile, because compile breaks it. I'm going to come back to this code for evaluating HellaSwag in a second.
And then, every 250th iteration as well, we're also going to sample from the model -- you should recognize this as our ancient code from way back when we started the video; we're just sampling from the model. And then finally, after we validate, sample, and evaluate HellaSwag, we actually do a training step here. So this is one step of training, and you should be pretty familiar with all of what this does, and at the end, once we get our training loss, we write it to the file. So the only thing I really added is this entire section for the HellaSwag eval, and the way this works is that I'm trying to get all the GPUs to collaborate on HellaSwag. We're iterating over all the examples, and each process only picks the examples that are assigned to it: we take i and mod it by the world size, and we require it to equal our rank, otherwise we continue. Then we render an example, put it on the GPU, and get the logits. Then I have a helper function that basically predicts the option with the lowest loss, so that gives us the prediction, and if it's correct, we keep a count. Then, because multiple processes were collaborating on all this, we need to synchronize their stats, and one way to do that is to package up our statistics into tensors, which we can then call dist.all_reduce on and sum up; and then here we unwrap them from tensors so that we just have ints, and the master process prints and logs the HellaSwag accuracy. So that's kind of it, and that's what I'm running right here. You see this optimization here, and we just had a generation; this is step 10,000 out of about 20,000, so we are halfway done, and these are the kinds of samples we are getting at this stage. Let's take a look: "Hello, I'm a language model, so I'd like to use it to generate some kinds of output." "Hello, I'm a language model and I'm a developer for a lot of companies." "Hello, language model..." -- let's see if I can find a fun one. I don't know, you can go through this yourself, but certainly the predictions are getting less and less random; it seems like the model is a little bit more self-aware and using language that is a bit more specific to it being a language model. "Hello, I'm a language model and like how the language is used to communicate." "I'm a language model and are going to be speaking English and German." Okay, I don't know. So let's just wait until this optimization finishes, and we'll see what kind of samples we get; we're also going to look at the train, val, and HellaSwag accuracy and see how we're doing with respect to GPT-2. Okay, good morning. So, focusing for a moment on the Jupyter notebook here on the right, I created a new cell that basically allows us to visualize the train loss, the val loss, and the HellaSwag score. You can step through this; it basically parses the log file that we are writing, and a lot of it is just boring matplotlib code, but basically this is what our optimization looks like. We ran for 19,073 steps, which is roughly 10 billion tokens -- which is, whoops, oh my gosh, which is one epoch of the sample-10BT of FineWeb-Edu. On the left, we have the loss: in blue, we have the training loss; in orange, we have the validation loss.
And in red, as a horizontal line, we have the OpenAI GPT-2 124M model checkpoint, when it's just evaluated on the validation set of this FineWeb-Edu. So you can see that we are surpassing it -- the orange is below the red -- so we're doing better on the validation set of this dataset. Like I mentioned, the dataset distribution is very different from what GPT-2 trained on, so this is not an exactly fair comparison, but it's a good cross-check to look at. Now, we would ideally like something that is withheld, comparable, and somewhat standard, and for us that is HellaSwag. So on here, we see the HellaSwag progress we made, from 25% all the way up to here. In red, we see the OpenAI GPT-2 124M model, which achieves this HellaSwag accuracy here, and the GPT-3 124M model, which was trained on 300 billion tokens, achieves the green line over here. So you see that we basically surpassed the GPT-2 124M model right here, which is really nice. Now, interestingly, we were able to do so while training on only 10 billion tokens, whereas GPT-2 was trained on 100 billion tokens, so for some reason we were able to get away with significantly fewer tokens for training. There are many possibilities as to why we could match or surpass this accuracy with only a tenth of the training tokens. Number one, it could be that OpenAI's GPT-2 was trained on a much wider data distribution; in particular, FineWeb-Edu is all English, it's not multilingual, and there's not that much math and code, so math, code, and multilingual data could have been stealing capacity from the original GPT-2 model, and that could be part of the explanation. There are many other reasons: for example, the HellaSwag eval is fairly old, maybe five years or so, and it is possible that aspects of HellaSwag, in some paraphrased form or even identically, have made it into the training set of FineWeb. We don't know for sure, but if that were the case, then we are basically looking at a training curve instead of a validation curve. So long story short, this is not a perfect eval, and there are some caveats here, but at least we have some confidence that we're not doing something completely wrong, and it's probably the case that when people create these datasets, they try to make sure that test sets that are very common are not part of the training set. For example, when Hugging Face created FineWeb-Edu, they used HellaSwag as an eval, so I would hope that they made sure to deduplicate and that there's no HellaSwag in the training set -- but we can't be sure. The other thing I wanted to address briefly is: look at this loss curve; this looks really wrong here. I don't actually know 100% what this is, but I suspect it's because the 10-billion-token sample of FineWeb-Edu was not properly shuffled, and there's some issue with the data that I don't fully understand yet; there's some weird periodicity to it. And because we are, in a very lazy way, serializing all the tokens and just iterating over them in order, without doing any permutation or any random sampling ourselves, I think we're inheriting some of the ordering that they have in the dataset. So this is not ideal, but hopefully by the time you get to this repo, some of these things will have been fixed, and I will release this build-nanogpt repo. Right now it looks a little ugly and preliminary, so hopefully by the time you get here it's nicer.
But down here I'm going to show errata and talk about some of the things that happened after the video, and I expect that we will have fixed this small issue. But for now, basically, this shows that our training is not completely wrong, and it shows that we're able to surpass that accuracy with only a tenth of the token budget. And possibly it could also be that the dataset has improved: the original GPT-2 dataset was WebText, and it's possible that not a lot of care and attention went into the dataset -- this was very early in LLMs -- whereas now there's a lot more scrutiny on good practices around deduplication, quality filtering, and so on, and it's possible that the dataset we're training on is just of higher quality per token, and that could be giving us a boost as well. So, a number of caveats to think about, but for now we're pretty happy with this. Now, the next thing I was interested in is, as you can see, it's morning now, so there was an overnight run, and I wanted to see how far I could push the result. To do an overnight run, instead of one epoch, which took roughly two hours, I basically just did it times four, so that it would take eight hours while I was sleeping. So we did four epochs, or roughly 40 billion tokens of training, and I was trying to see how far we could get. That was the only change; I reran the script, and when I point the notebook at the log file of the 40B run, this is what the curves look like. Okay, so to narrate this: number one, we are seeing this issue here with the periodicity through the different epochs, and something really weird with the FineWeb-Edu dataset, and that is still to be determined. But otherwise, we are seeing that the HellaSwag accuracy actually went up by a lot, and we almost made it to the GPT-3 124M accuracy up here, but not quite. So it's too bad that I didn't sleep slightly longer, and I think if this had been a five-epoch run, we might have gotten there. Now, one thing to point out is that if you're doing multi-epoch runs, we're not actually being very careful in our data loader: this data loader goes through the data in exactly the same format and exactly the same order, and that is kind of suboptimal. You would want to look into extensions where you actually permute the data randomly -- permute the documents around in every single shard, on every single new epoch, and potentially even permute the shards -- and that would go a long way toward decreasing the periodicity. It's also better for the optimization, so that you're not seeing things in an identical format, and you're introducing some randomness in how the documents follow each other, because you have to remember that in every single row these documents follow each other, then there's the end-of-text token, then the next document. The documents are currently glued together in the exact same identical manner, but we actually want to break up the documents and shuffle them around, because the order of the documents shouldn't matter; we want to break up that dependence, because it's a kind of spurious correlation. Our data loader is not currently doing that, and that's one improvement you could think of making. The other thing to point out is that we're almost matching GPT-3 accuracy with only 40 billion tokens, while GPT-3 trained on 300 billion tokens, so again we're seeing roughly a 10x improvement here with respect to learning efficiency.
I don't actually know exactly what to attribute this to, other than some of the things I already mentioned for the previous run. The other thing I wanted to briefly mention is the max learning rate here. I've seen some people already play with this a little bit in a related repository, and it turns out that you can actually almost 3x it. So it's possible that the maximum learning rate can be a lot higher; for some reason, the GPT-3 hyperparameters that we are inheriting are actually extremely conservative, and you can get away with a higher learning rate, and it would train faster. So a lot of these hyperparameters are quite tunable; feel free to play with them. They're probably not set precisely correctly, and it's possible that you can get away with increasing this, basically. And if you wanted to be exactly faithful to GPT-3, you would also want to make the following change: you'd come here, and the sequence length of GPT-3 is 2x, it's 2048 instead of 1024, so you would change this to 2048 for T; and then, if you want the exact same number of tokens -- half a million per iteration, or per step -- you would then decrease B to 32, so they still multiply out to half a million. That would give your model a sequence length equal to that of GPT-3, and in that case the models would be roughly identical, as far as I'm aware, because again, GPT-2 and GPT-3 are very, very similar models. Now, we can also look at some of the samples from the model that was trained overnight. So this is the optimization, and you see that here we stepped all the way to step 76,290 or so; the HellaSwag accuracy we achieved was 33.24, and these are some of the samples from the model. You can see that if you read through this -- pause the video briefly -- they are a lot more coherent, and they're actually almost addressing the fact that it's a language model: "Hello, I'm a language model and I try to be as accurate as possible." "I'm a language model, not a programming language. I know how to communicate. I use Python." I don't know. If you pause this and look at it, and then compare it to the model that was only trained for 10 billion tokens, you will see that these are a lot more coherent; you can play with this yourself. One more thing I added to the code, by the way, is this chunk of code here: basically, right after we evaluate the validation loss, if we are the master process, in addition to logging the validation loss, every 5,000 steps we're also going to save the checkpoint, which is really just the state dictionary of the model. Checkpointing is nice just because you can save the model and later use it in some way. If you wanted to resume the optimization, then in addition to saving the model, you'd have to also save the optimizer state dict, because remember that the optimizer has a few additional buffers because of Adam -- it's got the m and v -- and you need to resume the optimizer properly as well. You'd also have to be careful with your RNG seeds, random number generators, and so on. So if you wanted to be able to exactly resume optimization, you'd have to think through the full state of the training process; but if you just want to save the model, this is how you would do it.
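A minimal sketch of that checkpointing block, saving only the model weights as described; the optimizer state and RNG handling he mentions are left out here, and the exact variable names and dictionary keys (log_dir, last_step, val_loss_accum, raw_model.config) are assumptions.

import os
import torch

if master_process and (step % 5000 == 0 or last_step):
    checkpoint_path = os.path.join(log_dir, f"model_{step:05d}.pt")
    checkpoint = {
        'model': raw_model.state_dict(),   # the unwrapped module's weights
        'config': raw_model.config,        # assuming the model keeps its config object around
        'step': step,
        'val_loss': val_loss_accum.item(),
    }
    # to resume training exactly, you would also save optimizer.state_dict()
    # and be careful about RNG state; this only saves the model itself
    torch.save(checkpoint, checkpoint_path)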
And one nice reason why you might want to do this is because you may want to evaluate the model a lot more carefully. So here we are only kind of, like, winging the HellaSwag eval, but you may want to use something nicer, like, for example, the Eleuther evaluation harness. So this is a way to also evaluate language models, and so it's possible that you may want to use basically different infrastructure to more thoroughly evaluate the models on different evaluations and compare it to the OpenAI GPT-2 model on many other tasks, like, for example, tasks that involve math, code, or different languages, and so on. So this is a nice functionality to have as well. And then the other thing I wanted to mention is that everything we've built here, this is only the pre-training step. So the GPT here, it dreams documents, it just predicts the next token. You can't talk to it like you can talk to ChatGPT. If you wanted to talk to the model, we have to fine-tune it into the chat format, and it's not actually like that complicated. If you're looking at supervised fine-tuning, or SFT, really what that means is we're just swapping out the dataset for a dataset that is a lot more conversational, and there's a user-assistant, user-assistant kind of structure, and we just fine-tune on it. And then we would basically fill in the user tokens and we sample the assistant tokens. It's not a lot deeper than that: basically we swap out the dataset and continue training. But for now, we're going to stop at pre-training. One more thing that I wanted to briefly show you is that, of course, what we've built up today was building towards nanoGPT, which is this repository from earlier. But also, there's actually another nanoGPT implementation, and it's hiding in a more recent project that I've been working on called llm.c. And llm.c is a pure C CUDA implementation of GPT-2 or GPT-3 training, and it just directly uses CUDA and is written as C CUDA. Now the nanoGPT here acts as reference code in PyTorch to the C implementation. So we're trying to exactly match up the two, but we're hoping that the C CUDA is faster. And of course, currently that seems to be the case, because it is a directly optimized implementation. So train_gpt2.py in llm.c is basically the nanoGPT, and when you scroll through this file you'll find a lot of things that very much look like things that we've built up in this lecture. And then when you look at train_gpt2.cu, this is the C CUDA implementation. So there's a lot of MPI and NCCL, GPU, CUDA, C, C++, and you have to be familiar with that. But when this is built up, we can actually run the two side by side, and they're going to produce the exact same results, but llm.c actually runs faster. So let's see that. So on the left, I have PyTorch, a nanoGPT-looking thing. On the right, I have the llm.c code, and here I'm going to launch the two. Both of these are going to be running on a single GPU, and here I'm putting the llm.c on GPU 1, and this one will grab GPU 0 by default. And then we can see here that llm.c compiled and then allocated space and it's stepping, so basically meanwhile PyTorch is still compiling, because torch.compile is a bit slower here than the llm.c nvcc C CUDA compile. And so this program has already started running, and we're still waiting here for torch.compile. Now, of course, this is a very specific implementation for GPT-2 and 3; PyTorch is a very general neural network framework.
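Picking up the supervised fine-tuning point from a moment ago: swapping in a conversational dataset can be sketched as below. The "User:"/"Assistant:" template and the -100 ignore index are assumptions for illustration only; real chat formats use their own special tokens, and the shift-by-one for next-token prediction happens when the loss is computed.

```python
import tiktoken

def format_sft_example(user_text, assistant_text):
    enc = tiktoken.get_encoding("gpt2")
    eot = enc.eot_token  # GPT-2's <|endoftext|> id
    # Hypothetical chat template: prompt tokens followed by response tokens.
    prompt_ids = enc.encode("User: " + user_text + "\nAssistant: ")
    response_ids = enc.encode(assistant_text) + [eot]
    tokens = prompt_ids + response_ids
    # Only the assistant's tokens contribute to the loss; -100 is the
    # conventional ignore_index for PyTorch's cross-entropy.
    targets = [-100] * len(prompt_ids) + response_ids
    return tokens, targets

toks, tgts = format_sft_example("What is 2 + 2?", "2 + 2 is 4.")
print(len(toks), len(tgts))
```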
So they're not exactly comparable, but if you're only interested in training GPT-2 and 3, llm.c is very fast. It takes less space, it's faster to start, and it's faster per step. And so PyTorch started stepping here, and as you can see, we're running at about 223,000 tokens per second here, and about 185,000 tokens per second here. So quite a bit slower, but I don't have full confidence that I exactly squeezed out all the juice from the PyTorch implementation. But the important thing here is, notice that if I line up the steps, you will see that the losses and the norms that are printed between these two are identical. So on the left we have the PyTorch implementation, and on the right this C CUDA implementation, and they're the same, except this one runs faster. So that's kind of why I also wanted to briefly show you llm.c. It's a parallel implementation, and it's also something that you may want to play with or look at, and it's kind of interesting. Okay, so at this point I should probably start wrapping up the video, because I think it's getting way longer than anticipated. But we did cover a lot of ground and we built everything from scratch. So as a brief summary, we were looking at the GPT-2 and GPT-3 papers. We were looking at how you set up these training runs and all the considerations involved. We wrote everything from scratch. And then we saw that over the duration of either a two-hour training run or an overnight run, we can actually match the 124 million parameter checkpoints of GPT-2 and GPT-3 to a very large extent. In principle, the code that we wrote would be able to train even bigger models if you have the patience or the computing resources, and so you could potentially think about training some of the bigger checkpoints as well. There are a few remaining issues to address. What's happening with the loss here? I suspect it has to do with the FineWebEDU data sampling. Why can't we turn on torch.compile? It currently breaks generation and HellaSwag; what's up with that? In the data loader, we should probably be permuting our data when we reach epoch boundaries. There are a few more issues like that, and I expect to be documenting some of those over time in the build-nanogpt repository here, which I'm going to be releasing with this video. If you have any questions or would like to talk about anything that we covered, please go to the Discussions tab so we can talk here, or please go to Issues or Pull Requests depending on what you'd like to contribute. Or also have a look at the Zero To Hero Discord; I'm going to be hanging out there in the nanoGPT channel. Otherwise, for now, I'm pretty happy about where we got, and I hope you enjoyed the video, and I will see you later.
Let's reproduce GPT-2 (124M)
14,486
Andrej Karpathy
20240609
We reproduce the GPT-2 (124M) from scratch. This video covers the whole process: First we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 paper and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations. Keep in mind that in some places this video builds on the knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar. Links: - build-nanogpt GitHub repo, with all the changes in this video as individual commits: https://github.com/karpathy/build-nanogpt - nanoGPT repo: https://github.com/karpathy/nanoGPT - llm.c repo: https://github.com/karpathy/llm.c - my website: https://karpathy.ai - my twitter: https://twitter.com/karpathy - our Discord channel: https://discord.gg/3zy8kqD9Cp Supplementary links: - Attention is All You Need paper: https://arxiv.org/abs/1706.03762 - OpenAI GPT-3 paper: https://arxiv.org/abs/2005.14165 - OpenAI GPT-2 paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf- The GPU I'm training the model on is from Lambda GPU Cloud, I think the best and easiest way to spin up an on-demand GPU instance in the cloud that you can ssh to: https://lambdalabs.com Chapters: 00:00:00 intro: Let’s reproduce GPT-2 (124M) 00:03:39 exploring the GPT-2 (124M) OpenAI checkpoint 00:13:47 SECTION 1: implementing the GPT-2 nn.Module 00:28:08 loading the huggingface/GPT-2 parameters 00:31:00 implementing the forward pass to get logits 00:33:31 sampling init, prefix tokens, tokenization 00:37:02 sampling loop 00:41:47 sample, auto-detect the device 00:45:50 let’s train: data batches (B,T) → logits (B,T,C) 00:52:53 cross entropy loss 00:56:42 optimization loop: overfit a single batch 01:02:00 data loader lite 01:06:14 parameter sharing wte and lm_head 01:13:47 model initialization: std 0.02, residual init 01:22:18 SECTION 2: Let’s make it fast. GPUs, mixed precision, 1000ms 01:28:14 Tensor Cores, timing the code, TF32 precision, 333ms 01:39:38 float16, gradient scalers, bfloat16, 300ms 01:48:15 torch.compile, Python overhead, kernel fusion, 130ms 02:00:18 flash attention, 96ms 02:06:54 nice/ugly numbers. vocab size 50257 → 50304, 93ms 02:14:55 SECTION 3: hyperpamaters, AdamW, gradient clipping 02:21:06 learning rate scheduler: warmup + cosine decay 02:26:21 batch size schedule, weight decay, FusedAdamW, 90ms 02:34:09 gradient accumulation 02:46:52 distributed data parallel (DDP) 03:10:21 datasets used in GPT-2, GPT-3, FineWeb (EDU) 03:23:10 validation data split, validation loss, sampling revive 03:28:23 evaluation: HellaSwag, starting the run 03:43:05 SECTION 4: results in the morning! GPT-2, GPT-3 repro 03:56:21 shoutout to llm.c, equivalent but faster code in raw C/CUDA 03:59:39 summary, phew, build-nanogpt github repo Corrections: I will post all errata and followups to the build-nanogpt GitHub repo (link above) SuperThanks: I experimentally enabled them on my channel yesterday. Totally optional and only use if rich. All revenue goes to to supporting my work in AI + Education.
2024-06-10T13:47:09.642926
https://www.youtube.com/watch?v=iB8FWR9aD5Q
All right, Wiz. So loss functions. Are these things in GPTs? Yes. Are they in embedding models? Yes. Do we need to use them for fine-tuning? Yes. How about reward modeling, like RLHF? Also there, yeah. DPO? Has loss. End-to-end RAG, like the domain-adapted systems from our friends at Arcee? You'd better believe there's loss. Vision models? Loss. Multimodal models? Multimodal loss. Does everything have loss for training in machine learning? Yes. Huh, seems pretty important, no? Yeah, it's pretty important. We don't want to lose out on it. Well, everything is not lost. Today, we are going to drive down to the heart of this thing, see if we can grok the essence of loss. Are you ready for this, man? I couldn't be more ready. Let's go. Okay. My name is Dr. Greg. That's the Wiz. Today we are digging down into loss. And this one has been one we've sort of been working towards for quite some time. If you guys want to go deeper, there's lots of content that supports us driving further and further down into the LLM. But we want to sort of combine our transformer series, that included how we get stuff into the transformer, how we pay attention to it, and how we predict the next token, with this idea now of, once we can get outputs, how do we train models like an LLM using this idea of loss? So with that in mind, please do ask questions along the way. Let us know how you guys are thinking about this, and what issues, conceptually and from a code perspective, you're having. And hopefully we'll have a great discussion and event together today. Let's go ahead and dive right into it. Logits and loss, everybody. This is going to be how we do the training and fine-tuning that's really at the core of all of the awesome machine learning that's happening out in the world today. Number one, as we align ourselves to the sesh today, is we want to understand what loss is even doing. Very, very important. We want to understand cross-entropy loss today. In order to understand cross-entropy, we need to understand entropy. And in fact, this will lead into understanding also relative entropy, which we'll see is useful for a number of things that you might already be familiar with. We want to learn how to take logits, which are essentially the outputs of our LLM transformer decoder stack, and then compute loss based on those logits. And then really just trying to get a feel for how pervasive this concept is. I think you probably have some insight from our intro today, but hopefully we can give you a little more to chew on by the end of our hour together. All right, so why loss? Why logits? What can we learn about cross-entropy? When we look at computing loss in practice with a tool like PyTorch, what does it look like under the hood, and how is it slightly more complicated, and potentially slightly more streamlined, than trying to think about this from a first-principles perspective? We always want to be seeing our loss decrease as we're doing training. Many of you are probably familiar with this. And why loss? Well, it's because, like, in order to train, we have to know loss. And you know, the thing about this word loss, like, what is this thing that we're even talking about? Well, this loss function is not something that's new or super complicated from an idea perspective. The loss function idea is the same idea as a cost function in classical optimization. We need an optimization target.
It's the same idea as an error function. Remember y equals mx plus b, and we could sort of see how far off we were from our predicted line with the real data. We could calculate many different error metrics, and those metrics would sort of give us a single number that we could then use to optimize our line that we were fitting to the data. This is exactly the same idea here. We want to take what we're actually seeing, our actual data. We want to take our sort of predicted line, in this case through 2D space, but our prediction. We want to compare them, and we want to make sure that they're less far apart than they were before we were doing this step of training or fine-tuning or alignment. And if we do that, then we'll see that loss in essence really is the same, whether you're talking y equals mx plus b, an MSE, or ChatGPT. Okay. But when we talk about ChatGPT, when we talk about an LLM decoder stack, we're not training models to sort of predict this two-dimensional thing. We're training models to predict the most likely next token. And so when we want to predict the most likely next token, we want to sort of frame this through a loss function lens. And so when we think about LLMs and we ask, well, what is loss exactly? We can say that we're sort of taking the distribution of potential next tokens that we have, and we're sort of trying to move that distribution a little closer to the correct next token distribution. And the way that we're calculating this is with a metric called cross-entropy. We're looking sort of across the distribution that we have to the distribution that we desire. And we might say, well, we're looking across what exactly? And the answer is: looking at the logits and the targets. And we talked a little bit about this in our next token event last week or two weeks ago. And we saw down in some of the weeds of the code that cross-entropy is taking the logits, it's taking the targets, and this is happening during training. During inference, we don't need to compute loss, because it doesn't really matter during inference. Now you might be sitting there thinking, like, hold on, what exactly is going on? We just used a lot of words, and maybe we could stand to sort of slow this down a bit. So let's do this, but a little more step-by-step. You guys remember the OG y equals mx plus b. Well, we want to think of ChatGPT in a similar way. What we're doing is we're giving an input X, in this case "I want to learn," and we're getting an output, "how." And then what we're doing is we're shifting, and we'll shift to "want to learn how to." And we'll shift again to "learn how to cook." And so on and so on. And we're going to shift as we move through from input to output in this generative AI tool. We're going to continually pick the next most likely token. Now, GPTs are what are known as autoregressive. That means they predict future values based on past values. They predict Y based on X. So X goes in, Y comes out. This is the same as saying they are causal. Causal LM is something you'll see in the code a lot. And they're causal because what was put in causes what comes out. So at each step, we are deciding what's the next most likely token to be predicted.
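To make the y = mx + b error-function analogy from above concrete, here is a toy example with made-up numbers: a line's predictions and the actual data get collapsed into one scalar, exactly the role cross-entropy will play for tokens.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([0.0, 1.0, 2.0, 3.0])
y_actual = torch.tensor([1.0, 3.1, 4.9, 7.2])  # made-up observations
m, b = 2.0, 1.0                                # our current guess of a line
y_pred = m * x + b
loss = F.mse_loss(y_pred, y_actual)            # one number summarizing the miss
print(loss.item())
```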
And when we do it well, and we sort of tune it for chat, we align it with the things that humans would expect, it actually sounds like you're having a conversation. Pretty cool. So picking the next most likely token is really picking the next most likely token in the generated probability distribution. Because when we come out of the transformer, we're going to generate a probability distribution over all potential tokens. And so this idea of what loss is doing is we're comparing the generated probability distribution to the desired probability distribution at each step of next token prediction. And so we can kind of see here, we'll have a Shakespeare example today, because it's just sort of a classic to use with nanoGPT, which we started leveraging for our last event and we'll continue to sort of leverage as a learning tool. Shout out to Andrej Karpathy. You'll see that this sort of first input here is sounding very Shakespearean. "Spur me, have I?" And so you'll see sort of we drop the first token, we pick up the last token, we shift our window. And so we have "me, have I" is where we start, then "me, have I done." And we can sort of continue through with our next token prediction using this context window. Now, if we sort of represent this input as a distribution and we represent the output as a distribution, just conceptually, you could sort of say that during training and during fine-tuning, we're sort of taking this generated distribution and we're taking this desired distribution and we're trying to kind of merge these together. That's what we're trying to do. We're trying to sort of make these kind of the same. Ideally, if trained very well, these will be the same. Leaving aside that, if you have a transformer and an AI that's actually so predictable, you might want to sort of start mixing it up again, and maybe you choose to do that. But this idea is that during training, we're moving these distributions closer together. We're doing this through the use of cross-entropy loss. Our generated information that we're using to represent this are the logits. And our desired information that we want, those are the targets. So what we're doing is we're taking the logits and we're taking the targets, and we're sort of bringing them together towards being able to predict the next token correctly based on the data we're training on. Spur me, have I done. "Spur me, have I done?" I should say. Okay, so this idea of logits is very important. Targets is, well, simple enough: what are we aiming at? But why logits? Well, let's break down this word. It's kind of important. Logit just means the logistic unit. The logit is the log of the odds. Cross-entropy loss is actually also known as log loss. Put a pin in that log, log, log, for just a moment. Cross-entropy loss requires us to understand entropy. And when we think about entropy, of course, there's lots of ways from a physical scientist's perspective to think about entropy, but entropy in a next token prediction context is very well thought of as surprise. Because if you know exactly what somebody is going to say, if you know exactly what your LLM is going to spit out, if what's coming out of the LLM, or somebody you're talking to's mouth, is highly probable, there's very, very low surprise. Not surprised at all. Absolutely knew it. Knew you'd say that. In other words, as probability increases, surprise decreases.
We could say that these are inversely proportional to one another. We could say that the odds of being surprised decrease with more predictable words coming out of a machine or a human. And surprise then could be thought of as kind of the inverse of probability. And this is where we can start to connect some of these ideas, because if we're 100% sure about what somebody is going to say, then the probability that this happens is one. But the issue mathematically that we have, and this is one of the issues with just calculating inverse probability, is that if we take sort of one over the probability of one, sort of measuring how surprised we are, we don't want to get one. We want to get zero. Because if we're not surprised at all, why are we returning a value that would indicate we're, like, super surprised? Rather, this is where we invoke our friend the logarithm, because if we take the log of one over one, that's the log of one, and that gives us zero. It plays very, very nicely with this idea that if we're 100% sure, then we're not at all surprised. So it's useful. Now, if we sort of take this idea of cross-entropy loss and sort of stack some of these ideas together, we might think of this as sort of the surprise loss. And of course, we can think of it as the log loss for obvious reasons, because we're using the log. And the logits, remember, are simply the log of the odds of the probabilities. So all this comes together. And there's really not a whole lot of really crazy ideas in here. We're just sort of trying to say, can I develop a mathematical model that sort of represents the way I think about this in reality? If a word comes out, is it surprising? And so we have this idea of entropy. This idea of entropy leads us to now being able to stack into this idea of cross-entropy. But let's talk about calculating entropy first. Conceptually, all that stuff and sort of wordsmithing is fine. This is originally from Claude Shannon of Bell Labs way back in the day, sort of the godfather of information theory. And generally you're going to see an entropy equation that looks like this. But you can actually just rewrite this to make it sort of align with the discussion we just had a little bit better. And shout out to StatQuest on YouTube for crushing it and giving me some insights into this. Entropy written like this kind of gives us this log of one over the probability, which is kind of that surprise, right? So we're able to sort of talk about this in a more intuitive way, where we can kind of say, well, what's the probability that I'm going to pick something out? And then how can I create a metric using that probability that sort of denotes surprise? And the example from the entropy video, which is a fantastic one, uses chickens, right? So it's very easy to grok. We've got six chickens that are orange, one chicken that's blue. And so what we are going to do is we're going to sort of calculate the entropy here by looking at what's the probability that I pick out an orange chicken if I select, sort of at random, any old chicken. And we're going to have the probability of six out of seven. We're going to put that six out of seven right into our surprise here. And the blue chicken will be probability one out of seven. And we can get this number. The single number is the important idea here. Remember, when we have a loss function, we don't need a whole bunch of numbers.
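A quick check of the chicken example just described, using log base 2 so the answer comes out in bits:

```python
import math

probs = [6 / 7, 1 / 7]                               # P(orange), P(blue)
surprises = [math.log2(1 / p) for p in probs]        # log(1/p): small when p is large
entropy = sum(p * s for p, s in zip(probs, surprises))
print(entropy)  # ~0.59 bits: a random pick is usually orange, so mostly unsurprising
```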
We need one number. That one number is going to drive our optimization. It's going to drive, ultimately, the performance of the algorithm that we're training. And so we can take this sort of idea of entropy, and we can now look across generated and predicted values, generated and predicted tokens. We can modify this slightly to look like this. Very simple. And when we look sort of across generated and predicted tokens, what are we doing fundamentally? Well, we're looking across logits and targets. We're taking this kind of distribution of, in our case, words, and we're saying, how do I measure how close the distribution of words that I have now is to the distribution of words that I want? And of course, this is easily implemented in code, where we can simply invoke the very nicely laid out cross-entropy function. And it'll pick up our logits that come right out of our transformer. It'll allow us to take our targets and then compute, again, one single number from this. And we can get better and better and better at choosing whether I should be talking about grandma's funeral, grandma's house, or grandma's birthday. And again, this is only important during training. During inference, I don't need targets, because I just have what I have. I don't need a loss function. I don't need to know exactly how far I am away from where I want to be, because I'm not trying to go there right now. I'm just trying to get an output. And so we're going to see an example here where we're going to use some Shakespeare data and we're going to compute actual logits. There's going to be a little bit more complexity than we've talked about so far, because computationally there's sort of a lot going on when we deal with LLMs. Things are quite large. But we're going to go ahead and hand it off to the Wiz for a walkthrough on how to calculate loss right now. And then we'll come back and we'll sort of sit and we'll discuss, okay, what have we seen? What are the implications of this? What exactly can we take away from connecting the lower-level theory to the higher-level computations that are actually happening sort of at scale in these LLMs today? So let's sort of get some insights into how really everything is using loss, with the Wiz, as he walks us through Loss 101 down in the weeds. Wiz, over to you, man. Yes. Okay. So you can see here where we're starting: cross-entropy loss, right? So this is the PyTorch implementation. This is what we're going to see today. I think the most important thing to focus on here, and I'll zoom in just a little bit more, is this idea that this criterion computes the cross-entropy loss between input logits and target. Now, target in this case is going to be a specific index. Now what that means is, right, so when we think about our output layer, as Greg's been talking to us about, right, one of the things we know about it is that it represents a vocabulary, and every item in that vocabulary is going to be a separate token, right? So if we have, say, a vocabulary of ten words, we have like "the," "cat," you know, whatever it is, what we can do is we can actually show the number that that is in, literally, a list, right? So that's the index. Sorry, my brain melted for a second, but the idea is that's it, right? So that's going to be the target.
And then on top of that target, that's going to be the logits that are associated with our predicted token. So what that means is that we're going to wind up with two things to compare, just as Greg was saying, right? One is going to be a big list of numbers with only one value at the index that is our desired token, and the other is going to be a bunch of logits that's going to have the same shape as that other list, right? And then, very conveniently for us, if we look into the implementation here, PyTorch is actually going to do the softmax for us, and this is all going to occur during our forward pass. So we don't have to do this, you know, softmax first; we can actually rely on PyTorch to do that for us, which is super convenient. So what we have to pass to PyTorch is just the raw logits and then, of course, our target, the index of our target. And this is part of what we're going to be learning over the course of training: how to predict the correct target, or how do we get the right index, right? Indexes here are mapped, literally mapped, to tokens, right, through our decoding step. Okay, so let's look at the notebook. The notebook's not super intense today, right? The most important part of the notebook is what we're going to get to in a second. Okay, so first of all, dependencies. We have a few. Just to remind everybody, we are using nanoGPT from Andrej Karpathy. Absolutely amazing repository. One of the best for learning about these systems. We're going to get the dependencies that we need, we're going to get the repo, and then we're going to go ahead and move into the repo. We're going to use the prepare dataset script just to convert our big long blobs of text into a big long blob of tokens. We're going to see that we have 300,000 tokens in train and 36,000 tokens in val. Nice ratio. I've hidden all the preamble code. We don't really need to worry about it, but we need it in order to run. So we have it tucked away here, because what we want to focus on is the training loop. The training loop is where we're going to have this idea of loss come into play. And there's one thing that I want to focus on kind of before we get to the training loop, right? So you'll notice that we've just printed out our model. Our model is just this GPT model. Inside it is a transformer, a module dict, that has our word embeddings. Pay attention to this particular value, right? This is the token embedding. So this is the direct embedding from our embedding layer. And you'll notice that this has a shape of 50,304, right? Now that happens to be, of course, the size of our vocabulary. So right away we know, right, when we come in and we have tokens, we're going to go to some internal representation through the embedding layer from our vocabulary, right? So the input is going to be tokens. And of course, those tokens are, they're just numbers that map to a specific word outside of this process. Because even though it's a bit counterintuitive, the transformer never actually sees text. It in fact doesn't even care about text. What it cares about is these internal representations and then going to and from tokens. Since tokens represent text, all we have to do is a lookup and convert the token to text, which we'll see in a second. But that's the idea.
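A minimal, made-up example of the call being described here: raw logits in, an integer target index in, and PyTorch handles the softmax internally.

```python
import torch
import torch.nn.functional as F

vocab_size = 10
logits = torch.randn(1, vocab_size)      # raw scores straight out of the LM head
target = torch.tensor([3])               # index of the correct next token
loss = F.cross_entropy(logits, target)   # softmax + negative log likelihood, fused
print(loss.item())
```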
We have this input that has that size, our vocabulary, and then we have the output, which is the LM head, the language modeling head, which we talked about last week, which also has output size 50,304. Again, our vocabulary, right? And so this is the kind of tying together of these tokens to this concept of what we wind up caring about, and we're going to see when we calculate our logits why this is important. All right, so now that we're going to go ahead and train, we have to think about batch a little bit. Now, the word batch here is super overloaded. When we say we're going to get a batch, what we mean is that we're going to get some information from our training set that's in shape 12 by 1024. Now, the astute among you have probably recognized that that 1024 is actually going to be our sequence length, right? So that is the total sequence length of our model, meaning that our model can capture sequences of size 1024. You'll also notice this 12; we're going to largely ignore this 12. The 12 just represents batch size: we can take advantage of the parallel computing of a GPU, and the fact that it has a ton of memory, in order to do many passes at once, which is referred to as a batch. And so that's what this 12 represents. But the idea is that every time we get a batch with this get batch function, we're going to get a chunk of data for X and Y in the shape of 12, the batch size, by the sequence length, 1024, right? So every time we call this, we're going to get 12 by 1024 tokens from somewhere in our training set for both X and Y. And you might ask, well, how do X and Y differ? Well, we can look at that here. If you notice, right, you'll see that, you know, this starts with 1, 8, 7, 1, 9, then goes to 11, then 3, 51, where our labels start at 11, right? We can see the 11s match up here, then 3, 51, they match up here, then 4, 65, they match up here. So the idea is that Y is just X, but shifted, so that we have the next token. And that is the idea of our transformer network, right? It's a next token prediction machine. So what this is helping us to train, and how we're going to leverage this when we're looking at our loss, is that this is what tells us our target, right? And if you notice, these are just numbers, and all of these numbers are going to correspond to one of the potential indices in our vocabulary, right? This target is just an index. That's it, right? So this is the thing that PyTorch was asking for, the target, right? So that's all we need. What makes this very good is that, you know, it's just predicting the next token. Now, when we do our generation step, as you see, we can actually get a bunch of information all at once out of this, but that's just something to pay attention to. Okay, we're going to also import this tiktoken for our decoding step. The reason we're doing this is simply to make sure that we can look at what our input is in text. There's no other reason. So we're just using this to showcase what the input text looks like. Even though the model is not going to see this text, it is only going to see the token representation. So these representations that are just those, you know, integers. And remember that in between the input and output, the representation is completely different. We're only talking about the very beginning and the very end here. Okay, so now let's go into our training loop.
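The X/Y shifting being described can be sketched in a few lines. This is a simplified stand-in for the notebook's actual get batch function, which samples from a memory-mapped token file; the toy token tensor here is just for illustration.

```python
import torch

def get_batch_sketch(tokens, batch_size=12, block_size=1024):
    # tokens: a 1-D tensor of token ids from the training split
    ix = torch.randint(len(tokens) - block_size - 1, (batch_size,))
    x = torch.stack([tokens[i : i + block_size] for i in ix])          # inputs
    y = torch.stack([tokens[i + 1 : i + 1 + block_size] for i in ix])  # same, shifted by one
    return x, y  # both (batch_size, block_size); y[t] is the next token after x[t]

toy_tokens = torch.arange(0, 5000)
x, y = get_batch_sketch(toy_tokens, batch_size=2, block_size=8)
print(x[0])
print(y[0])  # y is x shifted forward one position
```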
There's a lot of stuff going on that I'm going to completely ignore to focus on the thing that we care about, right? So the thing that we care about is the logits and loss. Now, the way that we compute our logits and loss is by doing a forward pass in our model. So you'll see here that we have logits and loss, which is grabbed from our just forward pass, which is model. Zoom in a little bit here so we can see this a little bit better. And that's it, right? All we have is those logits and those loss from our forward pass. If we look at our nano GPT repository, we look at our model.py and we scroll down to our actual forward pass that we're going to use for the model. We're going to notice that all it does is generate those logits and loss. As you can see here, this is our forward pass, right? Those logits are just output from the language modeling head, which has that, you know, that output dimension of that 50,000 we talked about. And all we do is use the PyTorch cross entropy to look at all of the logits for that particular output and then compare them to our targets. That's it, right? So that's all we have to do. Now, if we don't have any targets and therefore we're doing inference, we ignore the part where we get all of the logits. We only need the last set of logits, which is what represents our next token prediction. But when we have targets, we're going to generate, we're going to look at all of them, right? So this is the key distinction between inference and training for this particular model. And that's it, right? So all that this is doing when we do a forward pass is it's giving us the output from the language modeling head, and then it's giving us the loss as calculated through the PyTorch cross entropy implementation. So that's very handy, right? If we go ahead and just close our window there, we can see that we can print these things out pretty straightforwardly, right? We're going to print out our inputs, our input in text, our targets represented in text, the shape of our logits, just so we can do some intuition building building and then our vocab size and sequence length which we get from our shape so let's scroll down to where training happens so in training we can see that our inputs truncated to last 15 tokens in the last batch were a bunch of numbers that makes sense and then the actual represented in text version is the error and be not fixed and doom perpetual hover right now what this means is that the model only saw these numbers of course but these this is the text representation now notice that the text representation of the target is the same thing but shifted forward one step right and that's important to keep in mind, again, because that's what we're comparing to. We're calculating that cross entropy loss, right? Across the whole sequence. So every set of tokens has logits. And then every set of logits has a label, right? Or a target. And what we're gonna do is we're gonna calculate all of those losses. So when we look at our shapes, we'll see that we have a vocabulary size of 50, 30, 5,304. And then our sequence length is going to be 1024. You can kind of, and you can see just the raw shape here, which includes our batch size. But the idea is that we have this, we have these logits, which are in the shape of our vocabulary. have these logits, which are in the shape of our vocabulary, right? So each position or index in that array that's of size vocabulary is a score for the index in that vocabulary, right? 
So what we're going to do is we're going to compare that to our targets at every single step of the sequence, right? So what this means is, because our labels, if we look up above, right, our labels have this shape 1024, right? You see this here: the Y has shape 1024. Now, what this means is that we have 1024 labels, and then we have 1024 logits, right? Now, labels, you know, they don't have an extra dimension, so we're not going to count them here. But the idea is that for every input sequence, or for every sequence-length logit, we have a label, which is the next token. And then we compute all those losses. And then we roll all of those 1024 losses up into a single value using averaging or addition, depending on your strategy. And then we'll be left with this, you know, this list of 12, which is our batch size. Again, we're mostly ignoring it; we'll be left with a list of 12 losses that we'll roll up again to count as our total loss. And that is how, despite having this multidimensional array, we wind up with a single scalar, just a number, right? This is the idea. So with that out of the way, I'm going to go ahead and pass you guys back to Greg, and we're going to do some discussing. All right. Yeah. Okay, so we are down in the weeds here. Let's go ahead and take a look at this visualization of nanoGPT, just to try to connect some things right now. We talked about this last time when we discussed next token prediction, and we saw just now from you, we're talking about this LM head and we're sort of calculating these logits. So we're taking all of our vocabulary by our embedding dimension, and we're multiplying it by our embedding dimension times our sequence length, T. And what we saw from what you just showed us here is that this LM head vocab dimension is actually not three, as in this nanoGPT visualization, but it's actually 50,304. And this token sequence length isn't 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11; it's 1024. And so, like, we're dealing with some quite large matrices here. Now, I want to sort of ask as well, if we kind of zoom in on what's actually happening down here, if we imagine that our logits here are this sort of 1024 by 50,304, you said there's also the batch of 12, right? Yes. And those batches, what's the best way to visualize them here, if we're sort of thinking about the batching? Yeah, it's just like a plate, plates going back, right? So we have this one 1024 by 50,304 plate. Just imagine 12 of them stacked on top of each other. Oh, okay. Okay. So just like in the third dimension here, we're sort of stacking back. Okay. So there's just, like, a lot going on under the hood of this PyTorch stuff, in short, is kind of the point I want to make here. And then if we kind of consider, okay, like you also mentioned that under the hood of PyTorch, we're not doing the computation with the raw logits. Rather, we're actually using the softmaxed logits to do this computation, but we don't handle it. We just give logits as input to the function. Is that right? Yes, that's correct. PyTorch does a special set of operations, which are the equivalent of what we want to do, that are computationally stable and efficient. Okay. All right. So PyTorch is better at it than you are. Yes. Yeah. Okay. Way to go, PyTorch. All right. So we want to kind of expand the frame now, and definitely, you know, start throwing your questions in the chat here.
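Rolling the (batch, sequence, vocab) logits and the (batch, sequence) targets up into one scalar looks roughly like this; toy sizes are used so it runs quickly, but the real shapes from the walkthrough are B=12, T=1024, V=50,304.

```python
import torch
import torch.nn.functional as F

B, T, V = 2, 8, 50304                      # toy batch and sequence; real vocab size
logits = torch.randn(B, T, V)              # one score per vocab entry, per position
targets = torch.randint(0, V, (B, T))      # one correct index per position

# Flatten to (B*T, V) scores vs (B*T,) indices: a loss per position,
# averaged by default into a single scalar for the whole batch.
loss = F.cross_entropy(logits.view(-1, V), targets.view(-1))
print(loss.item())
```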
Because one of the things that we can actually start to understand, given this context, is that if we can calculate cross-entropy by looking across two things, and we can calculate entropy, then what we can do is back out this idea of relative entropy. Another name for relative entropy is the KL divergence. Shout out to K and L, OGs in the game. And we can sort of take cross-entropy and subtract out entropy. And in fact, we see this happening in a lot of places, right? So we did an event on RLHF some time ago, and this is like the classic diagram that everybody's like, oh, what is this? And this is like from Hugging Face. And we see this sort of KL prediction shift penalty. So really, the way we've been talking about this, if you've checked out our events, is we try to avoid this sort of super mathy discussion, and rather we try to sort of break it down into more of a sort of tactical understanding. So kind of walk through how we can think about this in terms of, like, what we're looking across, and then what we're sort of calculating the entropy of in RLHF. Yeah, so the idea is that we want to know the delta or difference between the two distributions, right? So we have some distribution, you know, let's call it PPO, and then we have some distribution, let's call it base. And the idea is that we don't want these distributions to diverge too much, right? So the way that we do that is through this KL divergence, or sorry, KL prediction shift penalty. Yeah, yeah, yeah. Specific words on the screen. And what that does, basically, is it says, like, if your generation gets too far away from the base, we're going to receive more penalty than if it doesn't. And the way this is leveraged in RLHF is that we actually set a hard bound on this delta. So we don't allow it to exceed a certain rate. And what this means is that we can encourage our model to update its weights while being effectively clamped to our base distribution, so that we don't wind up just producing, like, say, the token for cat 7,000 times; that's what the reward model would like, you know? So we're looking across the base model and our policy model. And we're saying, we actually don't want you to be too far away from the base model. And the base model is where we're calculating our sort of core entropy. The cross-entropy is where we're looking across the models. And then we're sort of providing this constraint of: do not shift too much beyond the base. And so, very interestingly, we sort of have this reward modeling piece that's happening here as well. And we did mention that reward modeling is also an area where we're using loss. Of course, here we're updating the weights. Anytime you see weight updates, you should be thinking, ah, there's probably loss there. There's probably loss. So this is actually sort of just a check, and it's not the thing that's doing the training. It's not the thing that's doing the actual fine-tuning and the alignment and actually leveraging the loss function to update the weights. Rather, the reward model here is doing that. Is that correct? That's absolutely correct. Yes. So the KL divergence that we want to be low, you can think of it like the railroad tracks, right?
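In code, the KL "prediction shift penalty" being discussed is roughly the following; this is a simplified sketch, and real RLHF implementations compute and apply it with more care.

```python
import torch
import torch.nn.functional as F

def kl_shift_penalty(policy_logits, base_logits, beta=0.1):
    # Per-position KL(policy || base): how far the tuned model's next-token
    # distribution has drifted from the frozen base model's distribution.
    policy_logp = F.log_softmax(policy_logits, dim=-1)
    base_logp = F.log_softmax(base_logits, dim=-1)
    kl = (policy_logp.exp() * (policy_logp - base_logp)).sum(dim=-1)
    return beta * kl  # subtracted from the reward so generations stay near the base

policy_logits = torch.randn(4, 50304)                      # 4 token positions, toy values
base_logits = policy_logits + 0.05 * torch.randn(4, 50304)
print(kl_shift_penalty(policy_logits, base_logits))
```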
We we don't or well that's a bad one because it's stapled to those let's call it a let's call it a uh a mountain path to use the hugging face classic right where if we step too far off the path we're gonna fall off the cliff right so what we do is we say okay okay, you can walk within this path, but we actually use the input from the reward model, which is computed using a slightly different loss, right, to what we've been discussing so far in some cases. You could use an LLM there, of course, but if you say like a classifier, it's a slightly different loss, very slightly different. Um, and that's the thing that we're actually going to use to drive the, the, uh, magnitude of our updates. So one thing that I do want to be very careful about saying, we still do use the loss that we wind up with traditionally through cross cross entropy loss, but the magnitude of the weight impact or weight update is going to be dictated by the relative reward, where a low reward is going to encourage small or few or minor weight updates, and a large reward is going to encourage large weight updates. Okay. Okay. Okay. All right. So I just want to sort of go to this most classic example of what's happening here, because it's interesting in DPO. Now, I want to focus your attention in on the idea that we're doing O here. We're doing optimization. Anytime we do optimization, we need a cost function. We need an error function. We need a loss function. And in fact, that's exactly what we see here. It's more explicit in DPO. That's one of the sort of benefits of DPO is we're able to sort of put this very explicit loss function together. We are still doing our checking through KL divergence of reference versus aligned. And the loss function that we've again talked about previously is something that looks like this. And you can start to see some similarities between loss functions, even if they are a little bit different than maybe the one that we covered today. We're seeing the logarithm and we're seeing a number of other key terms in here for a slightly more complex system and optimization than the one that we're sort of using with nano GPT. So just to sort of get back to where we started here was we've got all of these things that use loss. Are they all using different loss functions or are they all sort of taking this idea of cross entropy and kind of using that as a starting point? What's the best way to sort of think about this and really extract the essence here. So let's say broadly of the listed technologies, cross entropy is going to be an important part of their loss puzzle. So just to be very, to be as robust to the small differences between them as possible. Now, because of that, right, there and this is typical kind of of the next token prediction game or the next, you know, pixel prediction, yada yada, right? Some of these, like, for instance, reward modeling, right, might use a slightly different loss function, depending on if we're doing say binary classification for uh our reward versus if we're trying to get an actual like regression score the dpo uses a modified version of loss which takes into account preference so where whereas for rlhf we just talked about this whole KL divergence reward model, add magnitude, all this other stuff, right? DPO kind of skips all of that and translates it back into a familiar cross entropy-y form. 
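Since the DPO loss is only shown as an equation on the slide, here is a small sketch of its usual form as I understand it (not code from the event): sigma is the sigmoid, beta plays the role of the KL-like constraint, and each log-probability is summed over the tokens of a full response.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Margins: how much more (or less) the policy likes each response
    # than the frozen reference model does.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # Prefer the chosen response over the rejected one, relative to the reference.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin))

print(dpo_loss(torch.tensor(-12.0), torch.tensor(-15.0),
               torch.tensor(-13.0), torch.tensor(-14.5)))
```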
And what we can do with that is we can kind of in the actual cross entropy loss calculation, we can inject the preference information, which is where it gets its fun name of direct. So, and then for end to end rag, we have two different losses that we care about, right? We have a loss from our retrieval pipeline, and then we have a loss from our, you know, our generator, our LLM. Now these two losses are actually not the same kind of loss. They're two different losses, right? They are not both cross entropy loss, but thanks to the magic of a number need to get lower. The same intuition holds where we can, we can kind of Frankenstein these losses together. And because the idea is that a most optimized model is a model that has the lowest loss, we can just stitch them together. And we know that even if the sum of those two losses is always going to be higher, we still have the information that is relevant, which is that one loss that is better than the last loss will be lower. Regardless of which system improved, regardless of which piece of the puzzle changed, if the net loss goes down, we can say that the system is performing better in that loss landscape. Okay. And so just before we get to these sort of other modes here, so it's similar to the way we saw that, like, we could take the entire sequence length, we could sort of compute loss for each token in the sequence. We could amalgamate this loss all together into one lost number. We could look across all the batches and get all the, and then we're just one lost number. And then we're actually going to go and we're going to then back propagate this information through the network, update the weights, and then we're going to do better at achieving that output and that outcome next time. And so sort of no matter how you get this number, which you're going to use many similar techniques to the ones we use today, you're going to be able to do this. And in fact, I've noticed that there's a lot of innovation happening with modifying loss functions in sort of state-of-the-art papers. And I've noticed that researchers tend to be thinking about ways to do this, that our potential game change, DPO is a great example, right? That's right. And so looking out for this is something that maybe will encourage everybody. Maybe we'll leave vision multimodal if you have just 10 seconds on what we should think about this for today before we sort of move on to closing up the Q&A, final Q&A. Yeah, it totally depends on the possible outputs of the models. But for the most part, it's just going to be in the same kind of lost world, especially if it's like our text is our output, then it's very cleanly going to be exactly what we talked about with some different top layers in order to get that internal representation. There's no real extra or new information to add beyond that, I think, especially with short time. The idea is that we still care about minimizing our loss in order to represent our training data the best that we can. Okay. And then, you know, I was just struck by how this was slapping with a lot of truth as we dug down deep into this today, this particular classic, what a classic meme this is. And then, you know, we're kind of seeing from regression to classification, we see the loss functions, we see the training, we see the similarities. From ML to LLMs, we're seeing a lot of things really come together, which is very, very nice. And I think they've updated this meme to include Gen AI these days. I've seen that as well. 
So I think in closing, if we could sort of wrap this together, like cross entropy loss, aka log loss, is sort of telling us about like how surprised we are about the next token. It's allowing us to minimize this surprise. The logits, the log of the odds, help us to be not surprised when we put a high probability on something. I'm not surprised that I'm not surprised. Thanks, log odds. And if we look across those log odds and the targets that we want to achieve, we can take where we are, which is the outputs from the LM head into the logits. And we can take where we want to be, which are the targets, put that into PyTorches. Nice tool for us. And we train ultimately to be less surprised than we were before we started training. So like you do with training anybody or anything or receiving training yourself, you want to be less surprised when you see that stuff in the future. So we just have a couple of questions today and I'm going to go ahead and there's a funny one on grokking that I don't think we're going to do a session on grokking. Endless grokking is a thing that I don't know about yet. And for supervised fine-tuning, we use causal language modeling approach. What about unsupervised fine-tuning? Is masked language modeling used? This is a great question because I think it's an often confused point. Yeah. So technically what we showed today is the unsupervised version,. Unsupervised here is an interesting thing to get stuck on. What I'll say is that it's all supervised with language modeling at the end of the day, even if we're not forthright about that. Yeah, yeah, yeah. Supervised all the way down. Because we need an input and an output. But that's a good question and sort of a common one. Can the logits vector after softmax also be considered as an embedding? Rustam asks. No, I wouldn't. I wouldn't. You perhaps mathematically, we can get away with kind of like tricking ourselves into saying that, but it's not right. And in fact, I would think of I would encourage you to think of logits as not being a vector. It's just a collection of scores of arbitrary scalar values. They don't represent anything together, right? It's just a score for each pip in the possible buckets that can have score, right? So I would encourage you to not think of logits as a vector conceptually. Rather as a distribution with the buckets. Yes. And then same with Softmax explicitly converts us into the world of a probability distribution. And I'm sure there is some math that exists somewhere that could tell you that a probability distribution is an embedding, but we're not going to use that term as it relates to the rest of our systems, because that term has a pretty specific meaning in the rest of the model that we want to avoid confusion with. Yeah, yeah. We might embed some numbers into a probability distribution representation. Okay, okay. Yeah. All right. It's time to wrap up, everybody. Thanks for joining. I do think that we are going to potentially go further down this rabbit hole in about two weeks. I think we left off about where we need to start back propping through. So look out for the next step in our journey. Thanks for following us guys. And thanks for showing us the way, man. That's a wrap today, everybody. Thanks for joining us. Please do like and subscribe if you haven't yet. We're here every Wednesday doing our best to deliver real value to you guys that want to continue to learn at the open source edge of AI with LLMs. If you're still here, you're going to be great for our community. 
If you're not in it yet, please jump into Discord and start building, shipping, and sharing today. There's folks building, shipping, and sharing cool stuff from our courses and beyond in the Build, Ship, Share channel every day. And if you're interested in checking out our top-rated course on Maven called the AI Engineering Boot Camp, we are doing some really cool stuff with it. And keep an eye out. We'll be making some announcements soon for the next cohort, cohort four, that starts in August. If you have any questions, feel free to reach out to me directly. And we will be continuing this series in the coming weeks. Check out our Luma page for upcoming events. We hope to see you there. And finally, if you have any feedback, we'd love to hear it. And we'd love to hear what you guys would love to see next, and how we can do better and better and better building, shipping, and sharing next time. As always, we will keep building, shipping, and sharing. We hope that you guys absolutely keep doing the same. See you on Discord. See you on LinkedIn, everybody. Have a great week. Bye, guys.
Logits and Loss: Training and Fine-Tuning LLMs
3,707
AI Makerspace
20240531
Join us as we unravel the essential role of cross-entropy loss in training and fine-tuning Large Language Models. Discover how this foundational loss function optimizes predictions, from standard methods like Low-Rank Adaptation (LoRA) to advanced techniques such as Direct Preference Optimization (DPO). Learn how cross-entropy loss helps make LLMs more effective for specific tasks and improves their performance. Don't miss this insightful session—subscribe and watch now! Event page: https://bit.ly/logitsloss Have a question for a speaker? Drop them here: https://app.sli.do/event/k5zBBvLo8oSCht1grbUeqc Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 For team leaders, check out! https://aimakerspace.io/gen-ai-upskilling-for-teams/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA How'd we do? Share your feedback and suggestions for future events. https://forms.gle/g4MyGn7NUWPxEhGY8
2024-06-12T11:16:51.226552
https://www.youtube.com/live/7N72dvQ7lDg?si=MiK5ER15YtFebGk7
Yo, Wiz, true or false? Transformers are basically just fancy classifiers. Yes. That's right. Okay. All right. How about technically the loss function used to train transformers and the loss function used to train classifiers are the same. They share very, very, very similar roots. Absolutely. Oh man. Okay. I'm excited for this one, man. Are you pumped to kind of get into the next token today and finish out our story of the transformer? I know I am. I'm at a loss for words, Greg. Oh yeah. All right. Let's get into it. Can't wait to have you back in a bit, we'll be discussing throughout the day. We will be digging in to the deets and we are here to discuss really one of the most important things that's at the root of all of the applications being built today in AI. My name is Greg. That was the Wiz. We are AI Makerspace. Thanks for taking the time to join us. Today, if you've got questions along the way, please drop them in the Slido link. Please drop them in the Slido link. Please drop them in the YouTube live chat. This should be a fun discussion, and we're excited to sort of kick off a new series of deeper dives into the Transformer and training inference and beyond. Everybody, welcome to the next token, how LLMs predict. So today is going to be kind of a fun one as we align ourselves to the sesh. What we really want to do is we want to kind of bring back the different pieces that we've talked about already, if you've been following, and we've got available if you want to dive deeper. We are almost at the end of the transformer stack. And once we can put in something to the transformer, get something out, we can do inference. And what's really cool is that once we can do inference and we can see what goes in and what comes out, then we can really talk about, well, how do we sort of compare what goes into what comes out, in and out, and optimize that, essentially do training. So we'll talk about what goes in, what comes out, exactly how we're predicting the next token after we're done with our decoder blocks. And then what implications does this have on training? And how can we connect this back to the classic roots of ML and AI that we all learned early in the game? So first we talk Transformer, then Token, then Loss. This should be fun, everybody. Let's get into it. So Transformer, of course, we see this kind of thing a lot. It's terrible. I personally can't stand it. I hope you can't stand it too, if you're watching our channel. And we've sort of discussed, actually, this is not the one that is mostly out there. The GPT style is the decoder only stack. We've talked about this in a previous event that you guys can check out. But if we zoom in on this, what we can see is we can see that there's some stuff sort of going in here, some embeddings, some positional encodings. We've talked about this stuff in a big long event we did talking about the Lego blocks, but we kind of got stuck because it took so long to talk about it. As we go into the decoder block, we run into masked multi-headed attention and just generally self-attention. We went into quite a bit of detail on self-attention, the QKBs and all of that in a previous event as well. Today, we are excited to finally dig in to the top of the stack here. And the top of the stack is really where we're predicting that next token. Everybody just says, oh, you know, the LLMs aren't so smart. They're just next word predictors. Well, let's take a look and see if that's true today. 
So as we go from in to out, we want to kind of see how this comes together. So as we start our journey into the next token here, I want to kind of flip this thing up on its head. One of the things that the best visualizations out there suffer from is consistency and which way is up and which way is down. So let's actually just go ahead and flip this bad boy on its head so we can get a little more intuition about how this is actually operating and we can get some better visualization of exactly how things are moving through the entire stack in a more realistic transformer. This is, of course, from the attention is all you need paper. There was this really great visualization put out recently that we're going to use today, and we're going to focus on taking a look at nano GPT. This, of course, looks like gibberish right here, but we're going to kind of focus in on how one of these things starts to come together. A little bit more complicated than the paper, a lot less complicated than what most of the stacks out there are actually looking like today. And if we sort of look at this nano GPT stack, we can see that there are three decoder blocks here. And these go through self-attention, add norm layers, feed forward network, aka multi-layer perceptron. And we're sort of moving through this stack. Now, like conceptually, very cool stuff is happening. What are we paying attention to and why? And we've talked about that in previous events. Today, we're getting a little bit more down in the weeds. And what we really want to do is want to kind of, again, focus here on this piece of the stack. So I want to take a closer look at NanoGPT with you. Now, if you're not familiar with NanoGPT, this is something that André Carpathi built, straight ledge, obviously, in the field. And it's a great one that we used to start with in a former class that we really were digging down into the details of it. It's a great one to start with in general. And I'm going to go ahead and show you guys what this thing looks like. If we look at our visualization and we kind of see it here, highly recommend that you check this thing out and you play around with it if you're interested in this kind of stuff. But if we kind of zoom in on the top here, what we can see, and this is a simple example, okay, what we can see is we can see that what we've got is we've got sort of our embedding representation here. Now we're trying to solve a very simple problem here. We're just using character level stuff and we've only got sort of three different characters abc that we're dealing with but we don't really need to pay attention to the specific use case as much as we just want to look at the structure here okay so what we have is we have our sort of our tokens of length t here all right that's that's going to be pretty important as we continue on. And what we've also got is we now have sort of for each token that we have in our sequence, we want to have an embedding representation of that token. And so if we look at our input embedding representation here, what we see is we see that we have T columns and we have C rows. Okay. So that means we have the number of columns equal to the number of tokens. And then we take our token embedding representation. We take our positional encoding information, smash these bad boys together and we have a number of tokens in the sequence by embedding length representation. Now this is pretty cool because this is the representation that we need to sort of stick with throughout. 
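As a quick illustrative sketch of that T-by-C input representation (sizes made up here, not the actual nanoGPT config), the token embedding and positional encoding are just two lookup tables added together:

import torch
import torch.nn as nn

T, C, vocab_size, block_size = 6, 48, 3, 16  # illustrative sizes only

tok_emb = nn.Embedding(vocab_size, C)   # one C-dimensional vector per token id
pos_emb = nn.Embedding(block_size, C)   # one C-dimensional vector per position

idx = torch.randint(0, vocab_size, (T,))      # a toy sequence of T token ids
x = tok_emb(idx) + pos_emb(torch.arange(T))   # smash them together: shape (T, C)
print(x.shape)  # torch.Size([6, 48])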
Because what happens is as we start to move into our transformer, what we see is that there's a lot of stuff going on in the transformer here. We can kind of see that if we track our embedding representation, T by C, we have a similar T by C layer norm representation, we have a similar T by C projection that's coming out, and a similar T by C pass to the next transformer block. So you can look at attention, you can look at the projection, but more generally it's interesting to look at this as a whole transformer block, the transformer zero block, and you can kind of like scroll down. As we sort of already talked about, we've got transformer one, transformer two, zero, one, two, transformer three. And so what's kind of most interesting to us today is that when we get down to the very bottom of our output, what we see once more is our good old friend T by C, tokens by embedding length representation. And so the question is, okay, well, what's happening to this exactly and how's it being transformed to give me the next token? And this is sort of the crux of the issue that we want to dig into today. All right. And so we see a number of pieces to our puzzle, but we're going to sort of discuss how to move through these specifically to get to a logits representation that's going to give us everything that we need to decide, well, what should our next token be? Interestingly, once we move through our decoder stack, we actually need to decode. That is, we need to decode our T by C to decide how to choose the next token. Because what's happened in T by C is I'm in an embedding space, right? But what I need is I need to get back into a space of vocabulary, a space of language. Embedding space is a vector representation. It's not a natural language representation. So how do we go then from this embedding space representation to tokens exactly? How do we do this? Well, in order to get some insight into this, let's zoom in a little bit. Recap. C by T goes in. C by T comes out. Now, as we look along the stack here, we see that we have language model head weights. And language model head weights are of size n vocab by C, and we multiply them into this embedding representation to get this distribution. Because when we multiply n vocab by C times C by T, we get a vocab by token list. Okay. So this is, and for this, you'll recall, we've sort of got one, two, three vocabulary letters, A, B, and C. And then we've got sort of our token sequence length here. And so this is pretty simple from a matrix algebra perspective. In fact, Wiz would tell you it's pretty simple from a programming perspective, and you'll see that today. And maybe it is. What's really cool here is that we're sort of moving from this embedding space back to this sort of distribution. And this distribution then incorporates our vocabulary information. And what we can do is we can take this sort of unnormalized distribution of which word has the highest probability to be the next word, and we can sort of somehow decide through these next two steps. So if we think about this idea of going from logits to next token, we're sort of talking about going from distribution to next token. And the way we want to think about doing this is by picking the most likely next token. 
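A tiny sketch of that projection step, again with toy sizes: the language model head is just a linear layer whose weight matrix is n_vocab by C, so applying it to the T-by-C output of the decoder stack gives one score per vocabulary entry per position.

import torch
import torch.nn as nn

T, C, n_vocab = 6, 48, 3  # toy sizes

x = torch.randn(T, C)                        # the T-by-C output of the decoder stack
lm_head = nn.Linear(C, n_vocab, bias=False)  # weight matrix of shape (n_vocab, C)

logits = lm_head(x)                          # shape (T, n_vocab): unnormalized scores over the vocabulary
print(logits.shape)  # torch.Size([6, 3])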
Okay, but how do we pick the most likely next token? Well, the answer is the soft maximum. That's right, the soft maximum. We get the maximum, but we also get everything else too. We get the stuff that's kind of close to the maximum. Now, this is still a naive programming technique. That is one that prioritizes imperfect shortcuts for the sake of speed, simplicity, or potentially lack of knowledge. speed simplicity or potentially lack of knowledge but it's not a greedy algorithm like the hard max would be if we use the arg max this is just going to tell me the absolute highest probability and so this sort of if you use that sort of argmax hard version, it's just going to say, okay, boom, this is the most in the most predictable possible way right you'd be pretty boring wouldn't you you kind of sound I don't know a little bit like a robot a little bit like a machine and this is maybe not super ideal. So a softer max is a way to sort of say, well, maybe sometimes we don't want to pick exactly perfectly this top most likely next token. Maybe we want to add a little flexibility, a little softness to this. And there are additional ways that we do this, of course, in state-of-the-art models. Now, what's interesting to note here is that we could take this set of logits and go directly to a token by building a classifier head. We could do that. Only we choose not to do that here. Weird. And it has a little bit to do with softness. But I want to bring Wiz back up to talk a little bit more about how he views the world here. And I think the right way to sort of get into this is, what do you think the right way is to select a next token in general? Because that's kind of an important question, isn't it? Yeah, I mean, if I had the exact answer, I bet I'd be, I'd be on a cruise ship somewhere or yacht somewhere. But I mean, there's not a real correct answer here, right? How there being a right way to select the token, I think is opposite to our intuition that our intuition would say, you know, like, that's very situation dependent. It's kind of dependent on what came before, yada, yada. The idea is that there's not like a perfect way to select the next token. And in fact, you you might claim like well what if i uh you know what if i i always want to choose what the correct token would be as for my data well that's still not necessarily the right token because that's just going to reproduce our data set which is not uh traditionally a very useful thing if we wanted a data set, it would be much easier to just print it out, right? Then build such a complicated algorithm to print it out. That's right. So this idea of the right way is kind of like, well, I mean, in an inference space, it's not necessarily clear. And this is sort of a lot of where kind of prompting comes into play, a lot of flavoring it like, hey, you want to think about you're talking from this role, or you have this feeling, or you want to make the customer feel like this or that. And this is what is so interesting when you dig down deep into here is that we have enough flexibility there to be able to do this. And then, of course, the number one parameter that people talk about all the time, because I think it's like the one you get access to with Open AI, maybe that's why, is temperature. And I guess technically, we could call this a smoothing of sort of the distribution that's happening down there. 
This temperature idea where like a higher temperature makes a little more uniform and a lower temperature temperature will make it a little more concentrated on the higher probability points. How should we think about, you know, using temperature exactly when we're building these applications? Yeah, it's exactly what you described, right? So this idea that temperature scales are logits. So if we have logits and we scale them by a very, very big number, which is to say we divide them by a very big number then they're going to kind of get closer to quote get closer together right and so when we soft max that the distribution is going to be a lot smoother or flatter right more close to uniform and because we're picking tokens from that distribution, we're going to be more likely to pick tokens that, you know, that are quote unquote further down the line than say the token that was scored the highest. And the opposite is true, right? If we divide our logits by a number less than one, they're going to become bigger, right? It's the same thing as multiplying them by that factor. So the idea there is if we have a lot of space between each of our logits, right, where the biggest one is going to grow proportionally more than the smallest one grew, we get this idea of a very sharp probability distribution where our most likely token now dominates the probability distribution is much more likely to be selected. And what that means in practice is that we're more likely to get the expected next token when we have a very low temperature. So a temperature close to zero. Obviously, we shouldn't have a temperature of zero. So we can't divide by zero. But many libraries, you know, deal with that with with epsilon or whatever. But the idea is, you know, if we, if we always want the model to give us what's expected, we should choose or sorry, not always to be very likely to give us what is expected. We choose a very low temperature. If we want it to sometimes make like a left field choice in terms of the token, right, that it's going to select, we should use a high temperature. This is often expressed in kind of a simplified form right as the model will be more creative if you have a higher temperature and it will be kind of more like you know follow the the flow chart if you've got a very low temperature and we'll look at some examples in the notebook of exactly how that that looks yeah yeah yeah yeah okay so then you know if I'm thinking about the distribution here if I just wanted to like zoom way in on this and I just just wanted to sort of look at, okay, I've got sort of A, B, and C here. Yeah. You know, what's the sort of way that you think about? So I'm imagining this some sort of normal distribution across each of these, you know, vocabulary choices. And I'm sort of, you know, randomly picking one, but the distribution takes care of the fact that the more probable it is, the more likely I am to pick it. Is that the right way to sort of think about this? Sure. I probably would maybe move away from normal but like uh yes there's a distribution that has a peak and that has a you know a low end that's the beauty of softmax right whatever we have can be fit into that into that zero to one box so it can be expressed as a probability distribution it doesn't mean it has to be normal right with a very high temperature we're going to kind of move more and more towards actually uniform distribution where every token is as likely as the next token. 
Obviously, your temperature has to be exceedingly high to get to that point in a lot of cases. But that's the idea where we're at some point when we sample from that distribution,'re gonna get a token at random right any of them will do uh and uh you know it's like rolling the dice which token we're gonna select versus with a very low temperature is a chance right where is a chance we're gonna get these kind of like 0.00 to to the minus, you know, tokens, we're probably going to get the one that's 99.9 repeating almost to infinity, right? So it's, this is the idea when we're thinking about that, that softmax that comes after we scale our logits with that temperature value. And this is something else we'll discuss a little bit in the notebook is this idea of we also have another unit we can manipulate, which is that, you know, top K, right? So when we're talking about our tokens, not only can we think about temperature, but we can also say, well, even though we want some variability variability we don't really want it to pick the 18th most likely token or the 15th most likely token right or we only want it to consider the top five percent right right right because we kind of have this intuition that if you consider you know the the the if you have a vocabulary of 128 000 tokens right if you consider the 128 000th choice uh it's probably wrong yeah right the model's pretty confident about that so this idea that we can also kind of we can zoom into a specific set of tokens uh is is incredibly useful yeah like if i'm speaking and I'm not really sure if the next word makes sense, maybe I'll stay away from it. Yeah, that's right. Yeah, you know, but we want some variability in that kind of, you know, in that top 5% of tokens that, you know, are probably right. We want some variability within those, so sure. Yeah, and sometimes you're feeling bold and you're like, I'm not quite sure this fits here, but I'm feeling it right now. That's right. Yeah, I want to go with. Okay, so one more thing I just want to point out here just to make sure I'm thinking about this right. So this is like a token sequence here, this sort of dimension. And so it's important to understand, I think that we've got sort of this sequence of tokens that is in this representation, but then we're predicting only one token. But there is a bunch in the sequence. So the representation actually is of a larger number of tokens than just the one you're predicting, right? And like what is that sort of... How do you think about that? Is that sort of the context window here that we're sort of predicting? What's the sort of easiest way to think about this from a practical perspective? Honestly, we just care about the last one. So I wouldn't think about it too hard. But the idea is we do compute everything and we only really care about the last one, right? So even though we are going to compute a bunch of things, it's kind of like, well, we don't need to predict it, right? Because we already have it. So it's, there's not a ton of use to consider it. So we just consider the last position as the index of the token that we care about. Yeah. 
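Here is a minimal sketch of the temperature and top-k machinery just described, using made-up scores rather than anything from the actual notebook:

import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 0.5, 1.0, -1.0, 3.0])  # made-up unnormalized scores
temperature, top_k = 0.8, 3

scaled = logits / temperature  # low temperature sharpens the distribution, high temperature flattens it

# keep only the top-k scores; everything else becomes -inf so softmax assigns it ~0 probability
v, _ = torch.topk(scaled, min(top_k, scaled.numel()))
scaled[scaled < v[-1]] = -float("inf")

probs = F.softmax(scaled, dim=-1)                   # now a proper probability distribution
next_id = torch.multinomial(probs, num_samples=1)   # sample rather than argmax, so there is some variability
print(probs, next_id)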
And so then as sort of vocab increases, it goes from three to ten thousand or a hundred thousand, and as sort of the number of tokens in the sequence increases, as we go from, I don't know, let's say 8K to 128K, then we start to get into the situation where when we do inference, we'll really want to take care of those things that don't matter in smart ways, right? Yeah. Oh yeah, yeah, yeah. Well, yeah, that's a whole other discussion. Yeah, yeah. Okay, good stuff. So by zooming in, we can also zoom out, and this is sort of the benefit of understanding some of this stuff. You just said logits; I'm going to dig in and we'll come right back to it, Wiz. Let's take another look at how we can think about this from an even simpler perspective now. If we zoom in even further, this from the Illustrated Transformer, and we're going to flip this thing back up the way it was because Jay Alammar put this out in a similar fashion years ago. So if we sort of have this decoder stack output here, now I'm not in a sort of token sequence by embedding representation form anymore. I'm in sort of a, I'm just in a single token, a single word in this case, by embedding representation C. And so as I sort of watch this thing move into and through my linear layer, I'm going to get now this number of words in the vocab that I'm going to be able to assess what's the probability of the next word. And of course the way I did this was I came in and I multiplied the language model head weights, number of vocab by C, here, but instead what we have is we have just this 1 by C instead of T by C. And so what happens is we end up in a space where we now are going to have a 1 by C times a C by N, giving us a 1 by N. And so that is going to now allow us to take our unnormalized logits and then normalize them, softmax them, into these log probabilities that are going to serve as the basis for which I can do simple things like argmax, right, or I can do more complicated, more flexible, more interesting things, especially if I have a more interesting problem I'm trying to solve. In this case, maybe I'm just going to select the word straight up. Okay, so this is sort of the one dimension, one word example that you can sort of think about, look at. And I want to kind of bring Chris back up for a little more discussion here before we actually see how to predict this. So we've got now this logits and this logprobs and this softmax. And I just want to kind of break this down for everybody. How do you think about this? What's the difference between logits and logprobs and softmax? Yeah. So logits are, in the worst way to say it, right, but the technically correct way, the raw, unnormalized scores, right? So this is the idea that like, when we project our output, right, from our embedded dimension to our linear layer, we're moving it into that vocabulary space. We know the attention that's computed is an attention score. That's what we care about. We care about this idea of a score. What is a value that indicates how much we should care about this thing, right? We quantify that with this idea of an attention score. When we project that into our vocabulary space, what we wind up with is a huge set of scores for every token in our vocabulary, right? And those scores, before we do anything to them, are called logits. So this is this idea that they're raw and they're unnormalized. So they're just like values, right? And they can be whatever you want. 
They can be small, they can be negative, they can be positive, they can be large, doesn't matter. But they're scores. When we move through softmax, right, when we take all of those logits and then we use that softmax transformation, right, or the softmax layer in this case, which is just going to apply softmax to those scores, right, that's how we move to those, the probs or the probabilities, right? This is the idea that, you know, probability distributions famously, right, have a few rules that they need to follow. They need to be between zero and one, right? Because they all have to sum to one; the whole distribution has to sum to one. And so this is the idea that we have to transform this kind of mess, which is still valuable, still incredibly valuable, still useful to think about, right? As we just talked about with temperature, we can do cool stuff with our logits to help us change what the resultant probability distribution is, right? But this idea that we have to eventually go to a distribution in order to interact with it in a way that we're used to. So sampling from it, et cetera. And this is that conversion. So the log probs are ultimately what we're going to use to pick a token, right? And we're just going to sample it from that distribution. So that means that- The log probs, aka the logits softmaxed. That's right. That's right. Yes. Yeah, absolutely. Yeah. Killer, killer. Okay. So you're going to show us how to do this in code? Is that right? We're going to move through it. Yeah. We're going to show everybody. Yeah. Okay. All right. So next up, guys, we've got actually predicting a token. And I'm going to hand this one right off to you, Wiz. Take it from here, man. All righty. Thank you, Greg. So we've got our Colab notebook open. It'll be shared in the chat for you guys. And the idea of this is pretty straightforward. We're going to approach it from kind of the way that we use it, and then we're going to needle down. So first of all, we're using nanoGPT from Karpathy. It's just like the best, it's nice and minimal, it works, you'll love to see it. We're going to get the additional dependencies that are required for this repository that aren't included in Colab; the rest of them are just included in Colab by default. You will need a GPU environment for this in order for it to work well. We'll clone the repo. We'll move into the repo. And then we'll just look at how it works, right? So we'll say things like we can sample from GPT-2 XL. We're going to put an input sequence of this text: What is the answer to life, the universe, and everything? And then we're going to want a response of one generated sample. And we're going to want a maximum of 100 tokens. And we get like the classic, right? So we get the, what is the answer to life, the universe, and everything? That's what we asked. One possibility is that they have a universe-wide perspective that allows for no contradictions. And if so, then why is there... I mean, this doesn't make sense, but it's related to the question and it's got words like Big Bang Theory. So we're clearly in the right place. This is just GPT-2, right? We've learned to expect a lot more from our language models. But the idea is that we pass in text, and then we receive out text. 
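For reference, that text-in, text-out round trip looks roughly like the sketch below. This assumes nanoGPT's GPT class and the GPT-2 BPE tokenizer from tiktoken; the names follow the repo as best as recalled, so treat them as assumptions rather than verified code.

import torch
import tiktoken
from model import GPT  # nanoGPT's model.py

enc = tiktoken.get_encoding("gpt2")
model = GPT.from_pretrained("gpt2-xl")
model.eval()

prompt = "What is the answer to life, the universe, and everything?"
idx = torch.tensor(enc.encode(prompt), dtype=torch.long)[None, ...]  # shape (1, T)

with torch.no_grad():
    out = model.generate(idx, max_new_tokens=100, temperature=0.9, top_k=200)

print(enc.decode(out[0].tolist()))  # the prompt plus 100 newly sampled tokens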
Now, there's a lot that goes on in between. So let's take a little bit of time to look at it. Number one, you know, we have to get into this. We have to get into this idea of auto regression, right? Causal, right? So these words, what do they mean? And basically what they mean is you take an input and generate a single token and then append that token that was generated back to the input and do it again over and over and over again. Auto regressive, right? So this is the idea. So how does this manifest in code? Well, if we look at the generate function in the nanoGPT library, we're gonna get an insight pretty quick. So generate takes some list of indices, it takes a max new tokens, it takes a temperature, and it takes a top K. The indices are the actual things we have in our sequence so far, right? So in this case, indices are going to be the tokenized output of this. This bit right here. So what do we do with those? Well, you know, our max new tokens comes in pretty quickly. We just repeat this step, right? A number of times equal to the maximum new tokens, easy peasy. And then we're going to do some truncating. So what we're going to do is make sure that the index fits in our block size. If it doesn't fit in our block size, we're going to have a bad time. And the way that we do that is we just lop off the top of it if it doesn't fit in the block size. So we can keep generating forward, but we're going to lose context from the beginning, right? So we talked about this in our long context event we did last week. If you want to go more into the context window. Then we're going to get some logits, right? So classic. We need our logits. Without logits, what are we doing? So the way we do that is we do a forward pass of the model, right? The forward pass of the model is just going to give to us logits and it is going to give to us a potential loss. In this case, loss is going to be none. So we just chuck it into this wildcard, don't care about it. So now we have logits. Remember, the logits are the unnormalized scores. So what this means is that they are just a representation, right, of how highly certain tokens scored. And the logits exist across our whole vocabulary. So every token in our vocabulary has a score, right? So this is the idea. We're gonna just not care about all of the other logits in our sequence, and we're only gonna get the final set of logits, right? Those logits are for the next token. So this is the last position in our logits, which you can see here. I know we're kind of dealing with like these 3D arrays, right? So we got these sweet tensors. Thanks, transformers. But the idea is, right, so this is for our batches. So we want to include all of our batches. We only want the last element, right, of our sequence. So that's the last token, what will become the next token. And we want all of the logits for that next token, right? Because that's how we're going to determine what the next token is. So we're then going to scale it by our temperature. We'll look in a second how that math checks out. And then of course we're going to use our top K. So top K, all it's doing is saying, you know, hey, we're gonna get our top K from our logits, and we're gonna either choose our top K or whatever the size of our logits are. 
This is to say, if we select a top K that is larger than the number of elements in our logits, then we're just going to, that's all of the logits, right? Or that's all of the elements in our logits. So this is all this is doing. And then all we're going to do is we're going to say, hey, you know, for every logit, if it's not being considered, we can't remove it, right? We can't eliminate it. We still need that information because it matters how many elements we have, but we're just going to set them to minus infinity. Minus infinity classically is going to be not selected no matter what transformations we do to it, right? Because it will always be the smallest thing. So it will never be selected. But this is the idea. We basically just say, hey, for every logit, right, that is outside of our top k, boom, you're minus infinity. We don't care about you anymore. Then we do the old softmax arena, right? We do this across the correct dimension to make sure that we are doing softmax for all of those elements in each of the logits. And then we are going to sample from that resultant probability distribution to get our next index. ability distribution to get our next index. And then we are going to concatenate our existing indices with the next index. So what does this mean? Basically, the result of this sampled index is going to be the token that we want next. And we are then going to append it to the next token or to the existing list of tokens. So it's going to become the next token, right? And then we just return at the end, whatever we've whatever we've concatenated. So because we're going to do this in the example, 100 times, we'll have repeated this process of 100 times, we'd have added 100 new tokens, and we're going to return IDXx which is now going to be 100 uh tokens longer and that's that's the thing right so but there's a lot of kind of there's a lot of stuff happening right here and then there's a lot of stuff happening uh kind of kind of here right so let's let's do a little bit of a zoom in so what are logits uh greg's walked us through that so that's's great. And what is temperature doing, right? So I think top K is intuitive, right? We choose the top K logits. So if you are among the highest K logits, you will be kept. Everything else turns into minus infinity, right? So what we're going to do with temperature is we're going to see how temperature influences the resultant probability distribution. So if we have this, let's just pretend our logits are 6, 2, 7, 0.1, minus 8, 9, right? We're making stuff up, doesn't matter, right? If we were using argmax, we'd just say this is the one, right? This index is the one we care about uh so this is what our token is going to be right but of course we are not actually uh going to use argmax right that and you can greedy decoding is great you just always pick the most likely next token every time right there's no chance you do anything else that that is a valid decoding strategy though you might find that the the resultant generations lack creativity and and the like uh so what we're gonna do is we're gonna scale these logits by our temperature in this case is just one and then we're going to do our soft max operation to get a distribution you'll see that we got a pretty wide range of values everywhere from uh you know 8 8.43 to the negative one, all the way down to this, you know, to the negative eight. So very small value. The idea here is that this is the highest, right? 
So nine is the highest, but the six is, you know, it's not doing so bad, right? We have this seven, right? That's doing a little bit better. So it's not like nine is the only thing that matters in that distribution, right? It's a, if we want to be very crass about it, right? It's like 84% chance we're going to select that nine. But that means that there's a 16% chance that we're not going to select that nine, right? So now if we look at a very low temperature, we can see these logits, they get pretty big, right? So we go from nine all the way to 90. And then when we look at our resultant probability distribution, we can see that all of the others are very small, up to and including one that is to the negative 74, and the 9 is 99.9999998% chance to be selected, right? So when we scale those logits by that low temperature, we make the one that is highest, right, much more likely to be considered. And then the opposite, if we use a large temperature, we can see that they kind of all settle into the same kind of range, right, where we have the instead of 99.99 with low temperature, or about 8085 with with with temperature equal one, we get this, you know, it's only 27% likely to be selected when we use a high temperature, right? Versus the second place, which is 22%. So this is the idea of temperature and how it modifies that resulting softmax operation that we do. And the idea here, again, is, you know, this is a smoother, more close to uniform distribution, which means we're more likely to get, you know, indices that we wouldn't have before. And then, you know, we get a very sharp or distinct distribution. There you go. So, this is the idea. So, for generation, we kind of get this pseudocode, right? For some range, which is user decided, we check and make sure our current range of indices will fit our block size. If we don't, we trim it so it does. We get the logits for the provided indices. We scale the logits. We optionally crop the logits. We apply softmax to converter logits into a probability distribution. We sample from that probability. We add the sampled index back to our input indices. That's the auto-regression coming in, right? And then we're done. So how do we actually get to logits? So we understand we're using logits, right? And those logits exist across that whole vocabulary space, but how do we get to logits, right? We know we're getting attention scores when we're using our our uh our our decoder stacks right we're we have this idea that we're scoring things right where where certain certain uh things are gonna be scored higher based on the relationship or providing surrounding context yada yada so the idea is that we have this idea of an LM head and that all it's doing, right, is it's taking the output of our decoder stacks, which is our input dimension, which is this N embed, right? That's the internal dimension of the model. And then it's going to project it onto our vocabulary size. So this idea that we're going to literally project from our embedding, our internal embedding space or representation, sorry, to our vocabulary. This is that's it. This is a learned process to be very clear, right? It is learned. So this is spoiler alert. This is where loss is going to come in, right? But the idea is that we're just projecting to get to that vocabulary space, right? Because it's not useful. The internal representation of the model is not useful outside of the model, right? So we need some way to project it into a space where it is useful. 
And we're going to use our linear layer to do that. And that's it. That really is the thing that does it, right? There's nothing more complicated in nanoGPT. In fact, there is no softmax that is applied in the model, right? It's applied afterwards. Now, that doesn't have to be true, you can do it in the model if you want, but nanoGPT is going to do this as part of the generation as opposed to part of the actual model. So there's no softmax layer in nanoGPT. Now, you know, how do we get from logits to loss now, right? So we've got to tie this into loss somehow. So the idea is, right, we get our decoder block to take input and compute attention scores, we project those scores from our internal model dimension onto our vocabulary, then we use the obtained raw, unnormalized scores, the logits, to find a probability. We sample, we append the token, we rinse and repeat. So how does this relate to loss, right? Well, how do we know what to score things? And how do we know the scores are correct? Or what's their degree of correctness, right? While we're training, we need something that we can target, right? This is the whole idea of machine learning. We need labels or targets, right? So how do we know to produce certain scores for certain tokens? This is where loss is going to roll in for us. And with that, I'll pass you guys back to Greg. Yes. Awesome. Masterclass on predicting the next token. And as you guys can probably imagine, we're going to continue this series by covering loss in depth, but we want to give you a little bit of an insight into it now, and make sure that we're leaving you off at a good stopping point. You know, as was mentioned, in order to train, we have to know loss. And, you know, the sort of logic train is something like this. We want to train models to predict the next most likely token. And so then in order to do that, we need this loss function. But what is loss? Well, if we think about these logits, that's our distribution, let's say, we want to measure the distribution that we have, and we want to make it a little bit closer to the distribution we desire. So we have some training data and that training data represents the next words we would expect to predict. We wanna make sure we're able to do that. So how do we calculate this loss exactly? Well, interestingly, and unsurprisingly, if you've got a machine learning background, perhaps, we use cross entropy loss. Now, this is something you might be familiar with. This is going to be the topic of our next event on this, where we dive into lots of depth on cross entropy and exactly what's going on. But this does connect back to our initial hot take that LLMs are kind of classifiers. Okay, when we talk about cross entropy here, the question is, what are we looking at the cross entropy of? Between which two things are we looking? And we're looking at the logits and also the targets. And the way that we sort of differentiate the logits and the targets is we can think about the logits as being the sequence up to the next word. And we can think about the targets as being a shifted sequence up to the word we want to predict. In order to train, we need input, we need output. 
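A rough sketch of where this is heading, paraphrasing the shape of the logic rather than the verbatim nanoGPT source discussed next: during training we score every position and compare against targets that are just the inputs shifted by one token; at inference we only need the last position.

import torch
import torch.nn.functional as F

def lm_head_and_loss(x, lm_head, targets=None):
    # x: (B, T, C) output of the decoder stack; targets: (B, T) token ids, the inputs shifted left by one
    if targets is not None:
        logits = lm_head(x)  # score every position so every next-token prediction contributes to the loss
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
    else:
        logits = lm_head(x[:, [-1], :])  # inference: only the last position matters
        loss = None
    return logits, loss

# during training the shift looks like:
#   inputs  = tokens[:, :-1]
#   targets = tokens[:,  1:]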
And interestingly, this is what it looks like down in the nano GPT code for the cross entropy calculation. If we don't want to train, that means if targets is not available for us because we're not training, what are we doing? We're doing inference. And when we do inference, we're using that language model head on the very last piece. So here we can see this sort of auto-regressive nature where we're predicting future values based on past values of our decoder stack transformer. We're shifting as we go during training. And this is what we will give you some insight into in our next event. I want to go ahead and bring Wiz back up to the stage to discuss a little bit about our question we began with. And, you know, we can think about this difference between logits and targets as manifesting in this shift, Wiz, but maybe we can just end on this question and come back to exactly where we are next time, and you guys shout out in the chat any questions that you have or please use Slido. But are transformers just classifiers, Wiz? What's going on here? Have we seen that they kind of are? Yeah, I mean, kind of is exactly right. They're kind of classifiers. I mean, they do classify, right? Like, we're not going to use them like classifiers very, very, you know, specifically, and if we wanted to, we'd use a different head. But the idea is, yeah, I mean, they're classifying in some sense, definitely, right? We are predicting, of a possible set of classes, or in this case tokens, right, which is the most likely, you know, of the batch. Now, the way we're using the model and the auto regression kind of moves into a prediction task, right? But the guts of it are very close to, you know, straightforward classification. So we're using auto regression and classification then, is that what we're doing? Yeah, I mean, somehow it's not surprising, right? We have to pick the next token, which means we have to know, given what we have so far, what token follows, right? Like, we have to pick, of a set of tokens, the next one. And so it feels a little like classification, even without knowing the internals. Yeah, yeah, yeah, yeah. Okay, well, I wanna go ahead and invite everybody to add your questions and we're gonna go try to cover as many as we can, but question from the audience was, how do we actually train these networks when there are multiple possible answers? Say that again? How do we actually train these networks when there are multiple possible answers? Oh, that's why we need that probability distribution. That's why we want to sample from it, right? So the idea is there could be more than one token that essentially works, right? So what if we have two tokens that have, you know, an equivalent-ish probability? They're never gonna be exactly equal, but like equivalent-ish probability. This is the benefit of sampling. Sometimes we'll choose one, sometimes we'll choose the other, right? So it's like synonyms in English or whatever. You know, you can choose whichever of the words you wish to, as long as it makes sense. And using the sampling strategy and the temperature strategy helps us to better emulate that. I mean, obviously, you might be leaning more towards one than the other, depending on like who you're talking to, or perhaps what you've said previously, right, to keep things consistent. 
But this is the idea that when we're training, and we're using this, this logits softmax approach we actually can capture this behavior where you know two tokens are equally likely or five tokens are equally likely equally in air quotes okay yeah so another training question here people are excited for the next training event i can see wouldn't it be better to train the neural network only up to logits with argmax without using softmax part during the training? Yeah, we don't use softmax part during the training. We just use the logits. Just the logits without argmax? Just the logits. Just the logits. Okay. Because the whole thing we just discussed, right? Yeah, that's right. The distribution and- We give them a pile of labels and a pile of produced logits, and we go from there. Yeah. This last question, how do we incorporate context to make our choice on the next word, I think is what this question is asking. Yes. This is sort of a raggy question, I think. So, I mean, they're the same token, right? Our logits will be high for that token in, let's say to simplify things a lot right we're going to say that the score for that token is going to be high for either context right uh because it can fit in either context so uh you know both when we're talking about you know weight and when we're talking about cash you know uh especially we're talking about Britain in in cash right we're talking about cash, you know, especially we're talking about Britain and cash, right? We're going to see pound to score highly as a potential next token. So this is the idea, right? Like the scores per token are generated based on all that context. So that's why we need to pass in some amount of text first in order to get a score that is good for that for that next token this is why you know things like few shot uh you know many shot work in context learning works right because essentially what it's doing is it's modifying the scores of the potential next token by all this stuff that came beforehand right got it got it okay well so next up was then what what uh so what are we going to cover we're going to cover sort of like dive down deep into cross entropy get a feel for what loss is try to understand exactly how useful this is in training what it looks like and from there we're going to be able to really open up exactly everything that's kind of at the center of a lot of these more complex techniques aren't we there's a lot going on with the loss functions yes we're we're we're finally at a place where we can start talking about loss, and it makes some sense to be able to be tied back to what we actually are going to do. All right. All right. So I'm going to go ahead and wrap up. In conclusion, everybody, thank you for joining us today. We saw that the decoder stack GPT provides us with an output that is in embedding space. We then want to put that output from embedding space into natural language space. We can do that through this calculation of the logits that is going to take into account our vocabulary information from that language model head, and we can eventually choose softly, perhaps, what the next best word is. When we look at inference versus training, we can look at logits that help us do inference, and we can look at what the targets are as we slide that window to try to understand exactly what's going on at the root of training. 
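Pulling the pieces of that walkthrough together, here is a condensed sketch of the sampling loop described above. It mirrors the steps as explained in the session rather than reproducing the exact repository code, so treat names and defaults as assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, idx, max_new_tokens, block_size, temperature=1.0, top_k=None):
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -block_size:]              # trim so the context fits in the block size
        logits, _ = model(idx_cond)                  # forward pass; the loss slot is unused at inference
        logits = logits[:, -1, :] / temperature      # keep only the last position, scale by temperature
        if top_k is not None:
            v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
            logits[logits < v[:, [-1]]] = -float("inf")  # anything outside the top k can never be picked
        probs = F.softmax(logits, dim=-1)            # unnormalized scores become a probability distribution
        idx_next = torch.multinomial(probs, num_samples=1)
        idx = torch.cat((idx, idx_next), dim=1)      # append the sample and go again: auto-regression
    return idx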
We're going to dig into that next time as we continue our journey into loss functions and deep down into the transformer, through which we'll see a lot of interesting things, including how to optimize running all of this stuff in a compute-efficient way on your hardware and much, much more. So if you enjoyed this event definitely like and subscribe on youtube that is a wrap for our session today and if you would like to follow along with the discussion that we have actively going on in our community all the time and folks in the chat definitely know this and are part of it then please join our discord community we'd love to have you we've got an event coming up in just a few minutes here with one of our best community members. And we are weekly telling stories about some of the most successful folks. I believe we got shouted out in the chat today, Garrett, and what they're out there building, shipping, and sharing. We just kicked off our most recent version of our AI engineering bootcamp last night. it was a banger. And if you're interested in jumping into the next cohort, please reach out to me and let me know if you have any questions. I'm happy to answer for you. And finally, if you have any other feedback at all, we'd love to hear it. If you have any content you'd love to see, we'd love to know about it. And until next time, guys, we'll keep building, shipping, and sharing, and we hope you do the same. We'll see you soon. Bye, everybody. Have a great rest of your week.
The Next Token: How LLMs Predict
3,757
AI Makerspace
20240530
Join in to learn about the foundational aspects of prompt engineering, retrieval augmented generation, fine-tuning, and agents, to exploring the technical nuances of LLM operations like prompt tuning and the intricacies of token prediction, this event is your gateway to mastering LLM application building. Discover how to effectively manage applications in production, optimize performance, and intelligently evaluate outputs. Whether you're an AI Engineer, a leader in the field, or simply keen on the latest AI technologies, this session promises a comprehensive breakdown of both the theoretical and practical aspects of modern LLMs. Don't miss the chance to expand your understanding from the fundamental mechanisms of token prediction to the advanced strategies of unsupervised pretraining and beyond. Click now to join our "Everything is Loss" series and start unraveling the complex yet fascinating world of LLMs! Event page: https://lu.ma/nextoken Have a question for a speaker? Drop them here: https://app.sli.do/event/1FixiyoBRqad346PixnFcc Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 For team leaders, check out! https://aimakerspace.io/gen-ai-upskilling-for-teams/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA How'd we do? Share your feedback and suggestions for future events. https://forms.gle/WmVKUE3bfAoi1KDS8
2024-06-13T08:06:42.354116
https://www.youtube.com/live/EeZIKQmWSXg
Hey Wiz, so if I'm a super beginner trying to get into fine-tuning, should I use Hugging Face and the PEFT library, or should I maybe pick up Mistral Fine-Tune instead? Hugging Face is probably great, yeah. So is it like a fundamentally different method that is being used for fine tuning between like a PEFT LoRA and the approach we'll see today with Mistral Fine-Tune? No, no, it's the same, same thing under the hood. Yeah, same same. Okay, okay. So is it a, quote, lightweight code base that enables, quote, memory efficient and performant fine tuning on Mistral models, at least? Yes, absolutely, it's that, yes. Is Hugging Face also a lightweight code base that enables memory efficient and performant fine tuning on Mistral models? The lightweight we can quibble about for sure. Okay. But the rest of it, absolutely yes. Okay, okay, okay. But it does the thing. It did the fine tuning, right? It did, yes. Okay, okay. So we're going to sort of try to assess today if this thing provided a, quote, simple guided entry point to fine tune Mistral models. And, of course, we can quibble about simple and guided, but it did the thing today, right? It did the thing. So, you know, it does the thing that it says on the tin, and here we are folks, another day, another tool. Welcome to the open source LLM edge, everybody. We're going to dive in and get to the bottom of the concepts and code behind Mistral FineTune. I'm Dr. Greg, that's the Wiz, and we are co-founders of AI Makerspace. We're excited to dive into this new tool, and by the end of today, you'll sort of recall what powers and underlies fine-tuning throughout the industry, not just open source tools, but even a lot of the closed source tools that you might have your hands on today. Of course, if you have questions along the way, please use the Slido. We will get to questions probably throughout this event. This is going to be kind of a discussion heavy one. So keep the questions coming in the Slido. And also if you've got questions that are super relevant to the discussion we're having at the moment, YouTube live. All right, everybody, let's get into it today. We're going to go ahead and kick off fine-tuning Mistral 7B with Mistral Fine-Tune. And aligning ourselves to today, we want to really make sure that we understand the legend, LoRA, that's at the core of all of the fine-tuning that we see. We want to understand how to use Mistral FineTune. We're going to show you how to do that. We're going to do some instruct tuning with it. And we want to compare and contrast what we saw with this new library to what we're comfortable with, what we're used to with Hugging Face's parameter efficient fine tuning library and methods like LoRA and QLoRA. So we'll start with a review and then we'll dive into what we're seeing from Mistral Fine-Tune, talk about Hugging Face versus Mistral FineTune, do some fine-tuning, and we'll again discuss throughout. So everybody, LoRA. Let's review here. First off, fine-tuning. What are we talking about? Well, we're talking about modifying the behavior of an LLM by updating the weights of the neural network, the weights of the transformer. And full fine-tuning means updating all of the weights. But full fine-tuning, because these things are so large, is often quite infeasible for the average Joe, for the GPU poor out there like we are and like we know many of you are. And so we need a better way, and the better way that the industry has really adopted is low-rank adaptation. 
And this is now not full fine-tuning, but rather fine-tuning only part of the neural network, part of the transformer, and using a factorized matrix approach to do so. Let's recall back to the OG paper here. October 2021 light years ago, quote from the abstract, as we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Absolutely classic. Hence, we propose LoRa, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the transformer architecture, meaning each attention layer within each decoder block, thus greatly reducing the number of trainable parameters for downstream tasks. Okay, hold on, say what? Freezes the pre-trained model weights and injects trainable rank decomposition matrices. Hold that thought. We're going to do some stacking and then we'll discuss. Mistral FineTune, just released, says, Mistral FineTune is a lightweight code base, memory efficient, performant fine tuning. It is based on LoRa, a training paradigm where, quote, most weights are frozen and only 1% to 2% additional weights in the form of low-rank matrix perturbations are trained. Low-rank matrix perturbations. Okay, so we've got training paradigm, 1% to 2% additional weights in the form of low rank matrix perturbations. That's how Mistral is talking about it today in May, 2024. And the guys from Microsoft that wrote the Laura paper talking about it in 2021 said freezes the pre-trained model weights and injects trainable rank decomposition matrices. Okay, so let's sort of cover a little bit of terminology before our discussion here. One of the things that really inspired the authors of the LoRa paper was a paper written in December of 2020 called Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. And so here we can see a quote from this paper. Common pre-trained models have a low intrinsic dimension. So there exists here a re-parameterization that is effective for full fine-tuning and can be as effective as the full parameter space. So this whole idea of this re-parameterization, that is where LoRa comes in. And of course, we're using a low rank adaptation approach. So it's important to understand the idea of matrix rank. The easy way to sort of understand this is to think about a very simple problem where we have a bunch of columns in our data set, and we're thinking about having a number of linearly independent columns. This idea is very common for anybody that studied matrix algebra. And so we can kind of think of how many features, how many columns are actually giving new information in sort of a broad way. We can sort of contextualize this idea of rank. How much of the information is actually important to, let's say, pay attention. And when we think about another classic matrix algebra principle technique that's used here, it's just decomposition. So we're decomposing a problem into its constituent parts, thereby breaking a difficult computation into simpler tasks. So all of this taken together from the idea of an intrinsic dimension, to the idea of low rank, to the idea of matrix decomposition, to the idea of trainable injected decomposition matrices to low rank matrix perturbations. We're going to sort of wrap all this together in a discussion now. I'd like to invite the Wiz back up to talk through this. Wiz, I think you did a pretty cool video on this idea of Laura quite some time ago. 
And I wonder if you can kind of just give us your overview, your thoughts, like with the diagram that gets shown in everybody's presentation it's the one on the first page of the paper as you look at this maybe you can walk us through what you see and you know maybe define some of these letters along the way for us yeah so i mean basically this is just a very fancy way to say that you you know, as we train our model, right, we can represent, so think of it this way. Our model has two, two real quote unquote real components, right? One is this idea of a, uh, you know, base weight matrix. And the other is this idea of a, you know, update weight matrix. Now, typically these are not like, we don't need to pull these apart. And in fact, we wouldn't because it adds a lot of, you know, overhead where we have to add them back together and everything like this. But the idea is that because we can represent our weight updates as a separate update matrix, right? And then we can lock in those base pre-trained weights, we can then take that delta matrix, right? And represent it in this low-rank, you know, product matrix form. We have these two matrices that will give us our actual, you know, weight update matrix. So the key insight here is that the base model weights are different than the final fine-tuned weights, and that difference is some delta weight, right? And we can represent that delta weight with this low-rank form. And the idea is we're going to pay computational overhead for this because we have to keep factoring together these matrices and then adding them back. But it's worthwhile to spend that little bit of extra compute in order to save a massive amount of required GPU memory. So while the training is the fine tuning is is is slower, we're adding latency to our training. Right. We massively reduce the amount of actual parameters that we need to hold in memory, which means we can train these models on much smaller than previously used, you know, hardware. And that's the crux of it, right? By training A and B and then factoring them together and adding them back to that base weight matrix, what we're really doing is we're figuring out what's the best, you know, form for A and B that results in weight updates that make us good at our task. So there you go. Okay. Okay. So it's really important to understand then that this is actually only important during training. Is that right? Where we're sort of actively updating the weights. So that's a great thing that you've mentioned. So no, well, kind of. So the fact that we can as a low-rank form means that they are very lightweight, and it means that, you know, well, if we can add these quickly to our base weights, you know, then, well, at inference time, actually, we can just add whatever one we want. So say we had Mistral, for example, and we fine-tuned it to do summarization, right? Well, we'd have like an adapter, right, which is just the LoRa weights that we could apply to that base model to make it good at summarization. But let's say we also fine-tuned it on a math task or a, you know, translation task. Well, at time of inference, we can choose which adapter to use. So it is very important for training, but we can also leverage it in order to make inference more quote unquote powerful. Okay. Okay. Yeah. 
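What Wiz describes here can be captured in a few lines of PyTorch. The sketch below is a minimal, assumed implementation of a single LoRA-wrapped linear layer, not code from Mistral FineTune or Hugging Face PEFT: the base weights are frozen, and only the low-rank A and B matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen base layer plus a trainable low-rank delta."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # r x d_in, trainable
        self.B = nn.Parameter(torch.zeros(d_out, r))        # d_out x r, starts at zero
        self.scaling = alpha / r

    def forward(self, x):
        # base output + the low-rank "perturbation" (B @ A) applied to x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because B starts at zero, the wrapped layer behaves exactly like the base model until training begins nudging A and B, which matches the "empty adapter" framing in the conversation that follows.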
So we can swap out these low-rank adapters at inference, but what we're doing during training is essentially plugging in an empty adapter and training it, calibrating it to the sort of thing we want it to be able to do, right? I mean, ultimately, when we're done training, do we have an adapter, or do we have a model that's fine-tuned? So, because we kept our base model frozen, we never actually touched it, right? We still have the base model. It still exists, but we also have this artifact that we've created that we commonly refer to as an adapter, which is just the LoRA weights. And as long as that base model doesn't change, we can carry those adapters around and then use them like a bit in a drill, right? Whenever we have that base model, we can use those adapters. So it's important to understand that, exactly: as long as the drill is the same, or the base model is the same, we can use that bit, or adapter, anywhere. We don't have to save the base model every time. We can download it when we need it, yada yada, we can move this all around. It's fine. But the base model has to be literally exactly the same, right? Or else the bit won't fit. Ah, yes. Okay, it's got to be the same drill, right? Exactly. Or at least the same little claw on the end of the drill. Okay. So then there's this difference in language between, if you read the paper and if you read the Mistral FineTune docs. Can you talk a little bit about this trainable rank decomposition matrix versus matrix perturbation idea? Why are we getting this differentiation in language? Where's the perturbation idea coming from, exactly? It's just a difference in language. It means the same thing, so it's not something separate. When we're training a weight matrix, we are perturbing the weight matrix. When we update our weights, we are wiggling them about, right? And a fancier way to say wiggling it about, of course, is just to perturb. Perturb, yes. But there's no difference; the language is just fancier, you know, it's got more college words in it. So it's talking about that delta W, that little change in weights, that we're then decomposing into an A times B matrix here. And so then, just as we think about the other couple of letters on this chart: I've got x by d, and can you tell me what these dimensions are and why they're relevant here? And then h as well. Yeah, so, okay.
The idea is that we have some base matrix, and that base matrix is some d by d matrix. Our initial input is x, and then our changed output is h. So all this is saying is: we have some d by d matrix, which is represented on the left by that big blue square, and we turn that into a d by r, r by d matrix set. We take the product of A and B, and then we add it to our pre-trained weights, which are in the form d by d. And of course, thanks to the way the matrix math works out, d by r times r by d is going to result in a matrix of size d by d, so the sizes all match up. x is our input, and then h is going to be our output from this process. All right. So x is basically heading into the transformer; that's what x represents, this x by d. It's the embedded and positionally encoded information, and it's flowing through the block. And then h: is this the output of the attention layer, or is this the output of the entire block here? So this is actually pretty interesting. We can use LoRA wheresoever there is a matrix. It doesn't have to be just the attention mechanism. It can be in the MLPs. It can be anywhere there's a big matrix that we don't want to be big, but instead want to be small. So in the initial LoRA paper, we did see that it was only applied to specific subsets of the weights in the model, specifically the Q and V projections, I believe, if I'm remembering correctly, and only to some layers. Now we're a lot less picky. We apply it to basically everything, right? We apply it to the MLPs, which is the feed-forward network. We apply it everywhere we can. In fact, with things like QLoRA we found that's actually even better; it results in better models at the end of the day. But the idea is this: LoRA is not a process that's unique to attention, or unique to specific layers in the transformer architecture. Now, it is useful because transformer architectures, specifically large language models, are so huge, and they have this property of intrinsically low dimension. So we can't just use this in any old matrix, but for transformer matrices, yeah, we apply it pretty liberally. We just slap it in there. Okay. And, I mean, let's go back to the whole name, right? We say LoRA, LoRA, LoRA, but it's low-rank adaptation. So it really is just a technique that can now even be applied much more broadly than we thought in the initial paper. Is that right? I would say probably the application space is the same.
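As a concrete illustration of that "apply it to attention and the MLPs" point, here is a hedged sketch using the Hugging Face PEFT library (not the Mistral FineTune library used later in the demo); the module names assume a Llama/Mistral-style architecture.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3")

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    # attention projections *and* the feed-forward (MLP) projections
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically on the order of 1-2% of all weights
```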
Large language models are where we're going to see this the most, and in other kinds of larger models where we've trained things so much that they have this property, where the matrices are so huge and the data is so plentiful. But yes, the way we apply it has evolved, or what we apply it to within that system has evolved, even if the actual crux of the application is the same. It's useful for LLMs; it's not very useful for your smaller networks, or for something like a really small BERT. We're not going to be thinking about this too much there. Okay, yeah, because it's all about reducing the number of trainable parameters, and if we've got a consumer-grade GPU and we can do a relatively complete fine-tuning on a pretty small model, we'll just do that. We don't need LoRA, right? It's really all about making sure that it aligns to the GPU power of the consumer today, for the GPU poor of us out there, right? All right. Sounds good. Thanks, Wiz. We'll come back to you to show us how to do Mistral FineTune. And speaking of Mistral FineTune, let's take a little closer look at the specific library. So what we can see with Mistral FineTune is that it is this lightweight code base based on LoRA. Now, for maximum efficiency it's recommended that you use a big daddy GPU; the code base is optimized for those kinds of setups, but for smaller models we can use a single GPU, and that's the way we're going to show the fine-tuning today. Now, they did provide a note in the repo that the goal is to provide a, quote, simple guided entry point to fine-tune Mistral models. This is what we're trying to test out today, and we'll see what you guys think as well: did they achieve their goal yet, or is there still work to do with Mistral FineTune? So they walk us through a number of types of fine-tuning in the repo. They say, well, you can do pre-training, that is, continued pre-training, you can do instruction tuning, and you can do function calling. Now, these are all fine-tuning. Continued pre-training is fine-tuning, instruction tuning is fine-tuning, tuning for function calling is fine-tuning, and they're all using the LoRA approach under the hood. Now, to get this done, it's a very simple order of operations, similar to what you would see in any other fine-tuning library: prepare the dataset, verify it, start the training, make sure your config is right, and then test it out by doing inference. They also noted, hey, you can easily plug Weights & Biases into this, and we went ahead and did that today because, you know, why not? Let's try out all the features and goodies here. When we looked at Weights & Biases, we were specifically looking at training loss, evaluation loss, and evaluation perplexity, although there are a number of other things that Wiz will show you are available if you're linked up to Weights & Biases, to look inside the black box as you're training. Okay. Now, when we think about loss, remember, everybody, how do we calculate loss? Well, we're going to use cross entropy. To go much deeper on cross entropy, join us again next week when we're talking logits and loss.
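For reference, the cross-entropy loss, and the perplexity that gets charted in W&B, boils down to something like the following PyTorch sketch; the shapes here are illustrative, and real training code also shifts the labels and masks out prompt and padding tokens.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 32_000, 6                       # illustrative sizes
logits = torch.randn(1, seq_len, vocab_size)          # model predictions per token
labels = torch.randint(0, vocab_size, (1, seq_len))   # the actual next tokens

loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1))
perplexity = torch.exp(loss)                          # the eval metric tracked in W&B
print(loss.item(), perplexity.item())
```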
We're going to go back down deep into the transformer and talk about exactly how these weight updates during training are tied to the loss function. Now, the other thing that Mistral FineTune allows you to do, and this is sort of an open question, is this super valuable or not, is it allows you to leverage their Mixtral models, the mixture-of-experts models. And this is directly from the repo here: a best practice for Mixtral models is that you really should train Mixtral models a couple of times independently, because depending on the seed that you use during training, you can actually get a really high degree of variance between instantiations of fine-tuning of Mixtral models. And I've got a quick discussion point here that I want to bring Wiz back up for, just in terms of Mixtral. Is there a reason why we're not fine-tuning Mixtral today? It seems like it's cooler, it's newer. Is it, like, harder or something to do this? What's the deal? It's not harder. In fact, it's the same. It's just fine-tuning; nothing changes. But the Mixtral models do have a GPU requirement that exceeds the possibilities of the Colab environment. So remember, Mixtral doesn't require a ton of active weights for inference, but it does require a lot of weights to be loaded in GPU memory, right? So even though, when we're doing inference, we're not touching all those weights, we need to be able to, in order to have all of the correct paths available to us through that model, and that requires us to have a larger GPU memory capacity, even if we're not going to be using that many weights while we're doing inference. The inference is still zippy, still fast, but we have to have that capacity to hold the big model and all the available paths in it. That's right. And as we said before, you can use LoRA not just on the attention layers but also, like you mentioned, on the feed-forward layers. And for everybody trying to recall exactly what Mixtral looks like and how it's different from an architectural perspective: that feed-forward network layer is replaced with the sparse mixture-of-experts layer, right? So you're saying you kind of have to hold each of these mini neural networks, feed-forward network one, two, three, et cetera, in memory. Even if you use injectable trainable low-rank decomposition matrices, you still have to hold all of this there, and that makes it more computationally intensive. And remember, we not only have to have those low-rank decomposed matrices, we also need to have those base matrices, those big honking frozen weights, which are going to take up all of our capacity, right? The adapters take up very little space, thankfully, but we've got to load all of this into memory so that every path is available. It's like, if we imagine that each of these feed-forwards is the equivalent of a door, we have to have all the doors available to us, even if we're not going to go through all of them every time, because we might need to get to a different room the next time we go through, right?
So we have to have them all there, even though we're not going to use them every time we do any kind of forward pass. Okay, yeah, makes a lot of sense. So literally, the more experts you have, the more compute you're forced to use, even if you're fine-tuning, even with LoRA, even if you're quantizing. It just scales with the number of experts. That's right. Okay, very cool. All right, then, we're just about ready to rock and roll into the demo today, guys. Instruction tuning with Mistral 7B is going to be based on, first of all, some instruction tuning data that we've grabbed off the shelf, and we're just going to use the Dolly 15K dataset. This is available directly on Hugging Face. It's a classic dataset that's got a lot of different categories of instructions: closed question answering, classification, open QA, information extraction, et cetera. So it's a broad-perspective view. Now, we're not going to use all 15,000 data points for fine-tuning, and we're just going to do a few hundred iterations, but this will give us a feel for the difference between the base model and how well it does with our instructions after we fine-tune it. Now, we're going to use Mistral 7B Base v0.3. The only difference between v0.2 and v0.3 is, like so many models today, that sweet, sweet long context window. It's up to 32K, 32,768 to be exact, and that's the real distinction from v0.2. So with that, I'm going to pass it off to the Wiz to show us how to go through Mistral FineTune to do some instruction tuning on Mistral 7B. Take it away, man. Yes. Okay, so this is pretty straightforward, thanks to this library. However, it does require, you know, we'll talk about it. So the first thing we've got to do is grab some dependencies, pretty standard stuff. We're going to go ahead and grab mistral-finetune, which is the repository, which can be found here. The repository has great instructions. It has a tutorial that doesn't work currently, though I'm sure they'll update it. And the basic idea here is pretty straightforward: we need to get the model and do some stuff, and we're going to walk through the notebook. So we'll get the repository, we'll cd into it, and we'll install all the requirements that we need. Easy peasy. You can ignore these dependency conflicts in the Colab environment; not worried about it. Then we need to download the model. We're going to download Mistral 7B v0.3. As Greg said, this is a long-context model; however, keep in mind that because we're doing this in a Colab environment, we're not going to be taking advantage of the long context. It's just not possible to do in the Colab environment, so we're not going to do it. If you're using the recommended equipment, which is a node of GPUs, you're going to be fine. But the idea is that we're going to use this 7B v0.3, still a great model, we love to see it. And then we're going to extract that model into a Mistral models folder. Easy. Now, the next step we have to think about is formatting our dataset into the correct format. We're going to do instruction tuning, so we're going to follow the instruction tuning guidelines that they give us in their repository.
As you can see in the notebook, this is a JSONL file that we need, with this key, messages, which holds a list of messages. The messages need to have a role and content, right? And this is very typical if you've seen fine-tuning before, where we have the role system with the system prompt as content, then our role user with the user prompt as content, and then our role assistant with the response as content. And that's it, right? This is a pretty classic example of fine-tuning, and it's easy enough to create this JSONL file. You do have to make sure that your data is in this specific format. So it is important that you've contorted things into this format, or else you will not find success, unfortunately. Now, we're going to be using some data from the LIMIT, less is more for instruction tuning, work. We're specifically going to be using Instruct V1, aka Dolly HHRLHF, and this is the dataset that we're going to be using today. It's a fairly standard dataset, pretty classic, right? From back in the day, it feels like. The idea is we have some instructions, we have some responses, and we're going to train the model to get good at following that instruction task. And that's the idea. Okay, so in order to do this, we're going to first just create a data directory to shove all our data into. We're going to cheat a little bit here and use Hugging Face Hub instead of just pandas; Hugging Face Hub is just easy to use, right? The dataset format is familiar and great. We're going to go ahead and use our notebook login, because if you're using this dataset, it might require accepting a EULA, and in order to make sure we've done that, we'll need to prove that we are who we say we are on Hugging Face. Then we're going to load our dataset, which is from MosaicML, Dolly HHRLHF. It's a very fun time. The best part of this Dolly HHRLHF dataset is that it's simple and straightforward, so it's easy to contort it into what we need it to be. As you can see, it's not in the format that Mistral expects currently. It is, in fact, definitely not in that format, right? So we have to contort it. We're going to create a simple formatting function that does that. We're going to create the expected format in a column called data, where we have our messages, which is a list of messages that contain the key role with the role and the key content with the content. And away we go, easy peasy. We're just going to make sure that our formatting function works by testing it on one of the samples. We go to our messages, we have the system prompt, "Below is an instruction that describes a task," perfect, and then our user, "What is Kangen water?", and then we have this explanation. Very cool. Okay, so we map our Mistral FineTune format function over the entire dataset, training and test. We can see now that we have this data response with about 60,000 prompts, and then we have our test set with about 5K prompts. Nice and good. We're going to save those as JSONL files, since that's what the mistral-finetune library currently expects, and we can just write these out. We're going to dump the data into that JSONL file and separate the records with new lines. That's where the JSONL comes from, right? JSON Lines, so every line is a new JSON object. And we can do that for our test set as well. We're going to call our test split eval, because we're not actually going to do testing with it.
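The formatting-and-dump step described here looks roughly like the sketch below. The source column names ("prompt" and "response") are assumptions about the Dolly-HHRLHF schema; the nested "messages" structure is the shape the repo documents for instruct data.

```python
import json
from datasets import load_dataset

dataset = load_dataset("mosaicml/dolly_hhrlhf")

def to_mistral_format(row):
    # contort one row into the {"messages": [...]} shape mistral-finetune expects
    return {"messages": [
        {"role": "user", "content": row["prompt"]},
        {"role": "assistant", "content": row["response"]},
    ]}

formatted_train = dataset["train"].map(lambda row: {"data": to_mistral_format(row)})

with open("data/train.jsonl", "w") as f:
    for row in formatted_train:
        f.write(json.dumps(row["data"]) + "\n")   # JSON Lines: one object per line
```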
We're going to evaluate during training with it, which is always fun, but it's just called test by default in datasets, so we're going to leave it there. Now we need to verify the dataset, and we're going to enter into what I believe is the current, I guess I would call it a shortfall, of this particular library in its current form, right? So we're going to run these reformat data scripts. First of all, they error silently for the most part. So if your data is not in the correct format, they might just not say anything. If your data is in a recognizable format that doesn't work, then they will complain, which is what we want; that's ideal. And they do try to reformat, but as they call out in the repo, if you have some exotic data, this isn't going to do it, right? You need to do the work to get the format into the shape that is desired by the library. This is not new, or specific to Mistral FineTune. Now, the next piece is our training, and it's our YAML file. So instead of using those long args lists or a bunch of parameters, we're going to use this idea of a YAML file, and the YAML file is going to dictate everything. So first of all, if we look at their repository and we look at all the different cool hyperparameters we have, we've got all kinds of stuff: checkpoint frequency, log frequency, rank, we've got it all, right? We're going to express all of this in this .yaml. It's not necessarily the best thing in the world, but it works, and that's what we want. So first of all, we need to set up the data part of our YAML file, where we're just going to pass in a data header, and then we're going to give an instruct data and an eval instruct data tag that we pass the paths to our training and eval data. Easy peasy. Then we go to our model ID or path, which is just going to point to the downloaded model that we created. Then we're going to set some classic hyperparameters: we've got LoRA rank, sequence length, batch size, micro batches, max steps, learning rate, weight decay. It's got it all, right? Well, it doesn't have it all, but it has a lot of things, and this is one of the limitations of this particular strategy. It doesn't have it all, right? If we look at the actual options that we currently have available to us, it's not everything that we're used to if we're coming from another library. However, it does make sense, and it works, and that's great. Now, you'll notice that the sequence length being used here is 4K. This is because we have a limited amount of GPU memory and we want to keep it relatively low. So where we might be able to get away with something higher, in the 7 to 8K range, we're just going to keep it at 4K to make sure that we're not blowing through our memory. Our LoRA rank is going to be 64, you know, dealer's choice.
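Building that config as a Python dict and dumping it to YAML looks roughly like this. The key names here are paraphrased from the values discussed above and should be checked against the example YAMLs in the mistral-finetune repo; treat them as assumptions rather than the library's exact schema.

```python
import yaml

config = {
    "data": {
        "instruct_data": "data/train.jsonl",       # training set
        "eval_instruct_data": "data/eval.jsonl",   # used for eval during training
    },
    "model_id_or_path": "mistral_models/7B-v0.3",
    "lora": {"rank": 64},
    "seq_len": 4096,          # kept low to fit the Colab GPU
    "batch_size": 1,
    "max_steps": 300,
    "optim": {"lr": 6e-5, "weight_decay": 0.1},
    "save_adapters": True,    # save the LoRA adapters, not full model weights
    "run_dir": "/content/limit-test",
    "wandb": {"project": "mistral-finetune", "run_name": "dolly-instruct"},
}

with open("train.yaml", "w") as f:
    yaml.safe_dump(config, f)
```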
We just can't make it too high, or else we'll run out of memory. And of course, we're only going to do this for 300 steps, so we're not going to fully train on our dataset; that would take a very long time. We're going to start with a learning rate that's rather high, and then we're going to decay it at a pretty standard rate, I think from the Chinchilla paper. And then we'll put our output directory at this content/limit-test. And then we just have to convert this to YAML format, so we do that here. You'll also notice we have some other parameters that we can set, like the seed, how often do we log, how often do we eval, are we going to do eval or not, how often should we save a checkpoint, and then save adapters. So remember that because we're doing this adapter fine-tuning, we need to be able to save those adapters periodically, right? So we're not actually training the model — it's silly to say, because we're definitely training the model, right — but we're actually training these adapters, and the adapters modify the model. And so this is the idea: we want to save those adapters, those two broken-out matrices, as we're going through training. And then our run directory is just going to be where we save this run. We're also going to integrate Weights & Biases, like Greg said. Weights & Biases is an easy integration; we just have to provide these options. Mistral fine tune is what we're going to call the project, the run name is going to be dolly instruct, we're going to provide our API key, and then we're going to write these things to our YAML. We're going to use offline equal to false, and there you go. Then we're going to save all of that out to a YAML file, and we can then use that YAML file to validate our data. And what happens here is that there's a script that validates all of our data: data is correctly formatted, stats for the data, we get all this cool stuff. It also gives us, in a very fun way, an ETA for how long this might take, which is pretty cool. You love that. So we validate the test set, we see that there are no errors, and we get no errors twice in a row. No errors twice in a row probably means that there are no errors, which is always ideal. So now that we've done this, we can go ahead and start our training. Training is very straightforward. We just need to make sure, because we're in Colab, that we provide these additional environment variables so that we target the GPU in our Colab environment. Then we're going to make sure there's nothing in the test folder, and then we're going to run torchrun with our training script from mistral-finetune, and we're going to point to that same YAML that we just created above and used to validate our data. So that's great, we love to see that. I see a question in the chat: what does number of steps mean in this context? That's just the number of iterations that we're going to run through. So in our file here, we're working with our sequence length, our batch size, and our number of micro batches, so the number of steps is the number of times that we repeat an iteration on a batch, which contains eight micro batches. So that's the idea. You can see that it's currently training now. We trained it beforehand, and then we're doing another run just to show off the W&B integration. Very cool. So let's look at W&B. W&B, as you can see — that's from the completed run.
This is from the run that's currently ongoing. So you can see that we have a bunch of different interesting things being tracked. And if we look at something like our training loss, we can see that we have this slowly declining training loss, but it's very noisy. Our learning rate is being decayed, as we would expect. And then we actually just finished an eval, and we'll do many more of those. So how will this look at the end? Well, this is an example of the completed run, where we have all 300 of our steps. You can see that our perplexity goes down, our training loss and our evaluated training loss go down, and our eval loss goes down. This is the expectation, of course: as we train, loss goes down. A very classic example, right? This is the idea with the W&B integration. This is all just done for us; we don't have to do anything. You love that. So now that we're done training the model, what do we have to do? Well, we've got to go ahead and use the model, right? So we're going to go ahead and use mistral-inference to do this. Mistral-inference is Mistral's version of how to do inference with the Mistral models, unsurprisingly. We're going to load our tokenizer from the downloaded model, we're going to load our model from the downloaded model, and remember, right, the model is the same; we just need those adapters. So then we load our LoRA adapters from our training directory, and then we can send it a request, very similar to how we would with OpenAI, very convenient. And then we're going to tokenize our request, generate on the request, and print some results. You can see that our results are very straightforward: machine learning is a subset of artificial intelligence that allows computers to learn from data without being explicitly programmed. I mean, it's great, right? It does the thing, it follows the instruction. The instruction was to explain machine learning to me in a nutshell, so it did great. And that is Mistral FineTune, a library that helps us fine-tune Mistral models. Don't forget to like, comment, subscribe, and hit the bell. It helps us out. We're here every Wednesday, having a great time talking about cool stuff. So thanks, I'll pass you guys back to Greg. Thanks, Wiz. So that's Mistral FineTune. And the last thing that we'd like to point out is, you know, how are these two things different? We'll kind of kick off our discussion here. Let's remind ourselves that the problem with full fine-tuning is that it's really not cool if you're GPU poor. And so the Hugging Face libraries use these parameter-efficient fine-tuning methods that are simply better than full fine-tuning — same problem, right, that they're trying to solve. The number one PEFT method is LoRA. That's the one you should know. And if you're a beginner, as we mentioned in the beginning, you should probably still start there. But Mistral FineTune does do the thing. Their CDN, their content delivery network, is rather slow; it took nearly an hour, maybe 45 minutes, to download the model. Their opinionated data formatting is going to give you some potential issues if you have complex data formatting. And remember, Mixtral is simply a more compute-intensive thing to deal with, not to mention you need to do multiple runs because of the nature of the way the Mixtral models work, aligning with their best practices in the repo. And then LoRA just sits at the bottom of everything. You can do it on attention layers.
You can do it on multi-layer perceptrons, feed-forward networks. You can do it at inference: you can plug in the adapter, or you can plug in the empty adapter and calibrate it during fine-tuning. And so make sure that you do have a handle on the concepts beneath the code, and that is LoRA. To kick off Q&A, I'd like to go ahead and invite Wiz back up to the stage. One more little discussion point. So as we think about Hugging Face versus Mistral FineTune, what jumps out to you as similarities and differences that people should keep in mind? Yeah, I mean, they're both used for fine-tuning models. They both will fine-tune models, so you can fine-tune models with both. You love to see that. Otherwise, I mean, the differences are quite superficial. It's doing the same stuff under the hood. Transformers had a long time, right, to polish this out, to build things that work exactly the way you expect and have all of the bells and whistles we've come to love about that kind of a library, right? And Mistral is just getting started. So I imagine that over time, this Mistral FineTune will evolve into a solution that makes a lot of sense and is quite useful. For the time being, I think they're on the path. It's a good first couple of steps in that direction, but the ease of use is just not there yet, in my opinion. Okay. All right. Yeah, it takes a long time to create really, really clean, user-friendly products. And, you know, Mistral's putting out a bunch of stuff these days. We look forward to seeing what they continue to put out as a true competitor to OpenAI, it seems, across the sea. All right. So we are going to get started with Q&A. We've got a solid 10 minutes, everybody. We've got a QR code up in the top of your screen that you can add questions to and upvote the questions you like best. I'm going to go ahead and kick it off with the first upvoted question today. Can we combine these adapters? I mean, training one to program another for medical and combining them together? Let's just sort of talk about combining adapters, I guess. Yeah, I mean, model merging exists and basically is that. So yes, the answer is just a simple yes. We can do that. Yeah. And model merging is basically, like, adding them together, right? These perturbations, let's say these injectable low-rank decomposition matrices, these perturbations to the weights — that's what we're adding together when we do the model merging. And we do have some model merging material that we've gone through recently with the creator and with Arcee on our YouTube channel. Check that out. Next question. So, can we invent a multi-adapter instead of multimodal? How does multi-adapter fit into multimodal? And I think there's a different question baked in here, Rami: have one adapter as a router. Maybe we'll take those one at a time. So, multi-adapter for multimodal. Yeah. So it probably won't be for multimodal. It's not like each adapter will handle a separate modality, but it is the case that we can create a multi-adapter system instead of multiple models. But in terms of getting a vision model or an audio model as an adapter to a language model, it's not going to work. We need to have that image modality or language modality as part of the model already.
And then having one adapter as a router — having one model that we build a router for, that uses the right adapter? Yeah, sure, absolutely. That's possible. We might use a simple classification model on the head of it in order to route to the correct adapter. But that's still a space that's being explored very much by people. Well, I mean, that kind of reminds me of the idea that within the repo we have the function calling capability. And of course, when we talk about fine-tuning for function calling, we can very easily imagine a router being used in a more agentic way, right? And so I think one of the key things that I want everybody to take away, that maybe isn't obvious to everybody, is that function calling is just another form of fine-tuning. It just requires, what, a more specific formatting, Wiz? That's basically it. That's it, yeah. Okay. All right, so what's the best GPU to buy? Here's a good one for you, Wiz. What's the best GPU for a small-scale industrial application? 4090. Just get a 4090. It's a great card. A 3090 will also do. The 3090 Ti, I think, is the 24-gig card. You don't need to spend, you know, enough for something like an A6000. You don't need to. So yeah, basically just accumulate cards that have 24 gigabytes of GPU RAM, in whatever flavor is cheapest to you, and go from there. And just stack them together till you have your little 24-gig card cluster. Okay. So Don asks, isn't YAML less likely to handle data format issues well compared to JSON? So we're only using the YAML for the configuration file. Everything else is in JSON or JSONL, and the data is held in JSONL. We're just using YAML as the config. But yeah, it's just a choice. YAML, config — name a more iconic duo. I can't. I can't name one. Yeah. Okay, can we do this without W&B? I know the answer to that. Yes, you can. It's definitely optional. Any other comments on that? Would you like W&B? Yeah, W&B is great. It's free. It works. The real thing to say is that you should just use W&B because it's free and it's great. Yeah. It's funny, because we were having the same discussion in class last night. Like, why should we use W&B? I think that's a good enough reason. Yeah, it's free and it's great. Okay, another question from Rami here. Any guide sessions or scripts to prepare and test a standard dataset for Llama, Mistral, or other LLM fine-tuning dataset formats? I think this is a dataset formatting question. I'd probably point you to specific fine-tuning events. We've got a fine-tuning playlist — if we did Llama, you've got to put it in Llama format; if we did Mistral, you've got to put it in Mistral format. We've done other ones like OLMo and a few others as well. I would check out our fine-tuning playlist. Anything else to add there, Wiz? No, I think that's a great place to start. It's just a lot of reading and working, but you'll get it quickly. If we thought a dataset formatting event would slap, we would do one. This is the first time I've heard that feedback — if you guys want it, let us know, and we'll put it together. How does the choice of optimizer, like Adam or stochastic gradient descent, impact the performance of a fine-tuned LoRA model? Is there, like, a right answer for the optimizer? The right answer is Adam or a version of Adam.
Just straight up, that's what everyone will use and does use. They have paged versions, they have fused versions, they have all kinds of fun kernel optimizations that make it very zippy and very quick. So Adam is basically where we're at. Here's an interesting question. Since we brought up this attention layer versus MLP layer fine-tuning, which one's better? Which one should I do — fine-tune the attention layers or fine-tune the MLPs? Why not do it all? You know, you could target either one if you really wanted to, and intuitively attention feels like the place to start, but we'll do all of it, just because it's the recommended thing to do, right? It's easiest and it's the lowest memory, and we're going to be fine. To be very clear, we're going to be fine-tuning those layers no matter what. It's just whether we're doing full fine-tuning or LoRA adapter fine-tuning — that's different, but they're going to get fine-tuned. So there you go. Boom, there it is. That's a great place to wrap up. Thanks, Wiz, for showing us Mistral FineTune today. And that brings us to the end of today's event. Don't forget to like and subscribe and ring that bell. If you liked this session and you're not in our Discord yet, you should definitely join. We've got great community vibes going. And I'd really love to see folks that join also consider building and deploying their first-ever LLM application. This is the Beyond ChatGPT event that we put together a while ago now at this point, and it's something that we require for everybody that takes our AI engineering bootcamp. So if you're up for a challenge, I would encourage you to see if you can build and deploy your first LLM application and share it in Discord in the Build-Ship-Share channel. There's a ton of awesome activity going on all the time with folks building their very first application. Now, if you really want to accelerate your AI engineering learning, you might check out our AI Engineering Bootcamp. We've got a lot of great, cool, fun, interesting announcements coming soon. We just launched cohort three; cohort four is in August, so you can start planning for it now if you want to learn with me, with a great group of peers, AI engineers, leaders, and many others, as well as get access to really high-quality opportunities to get in front of hiring partners based on your certification. Consider this as a pathway in 2024 for you. Next week, we talk loss functions in our logits and loss event, all on training and fine-tuning. We're going down deep into the transformer again, so join us for that one. And finally, provide any feedback that you have. We take it seriously, and we try to improve all the time. As always, in the meantime, we will do our best to keep building, shipping, and sharing, and we hope that you do the same. Thanks, everybody. Have a great rest of your week, and we'll see you all real soon. Bye, guys.
Fine-Tuning Mistral 7B with Mistral-finetune
3,635
AI Makerspace
20240606
Join us for an in-depth exploration of Mistral's new fine-tuning library for LLMs! We’ll dive into the world of Parameter Efficient Fine-Tuning (PEFT) with a focus on Low-Rank Adaptation (LoRA), the industry's leading method. Learn how Mistral’s tools stack up against Hugging Face's PEFT-QLoRA techniques and discover practical tips for integrating Mistral-finetune into your applications. Event page: https://bit.ly/mistralfinetune Have a question for a speaker? Drop them here: https://app.sli.do/event/599JXnpHUCmXSVbQ57on5Q Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 For team leaders, check out! https://aimakerspace.io/gen-ai-upskilling-for-teams/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA How'd we do? Share your feedback and suggestions for future events. https://forms.gle/T3qe12sBZsS5SSiH7
2024-06-13T08:12:54.946701
https://www.youtube.com/live/Anr1br0lLz8
Hey, Wiz, is there a way to know that what comes out of any RAG application that we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. So there are no absolutes, but is there a way to know that changes that we make to the system, to our RAG application, make the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah, I think we can make, like, a RAG assessment. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg, and we are here to talk RAG eval today. We're AI Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout-out in the chat, where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from Langchain, their most recent stable release and most stable version ever. We're also going to outline how you can actually assess your RAG systems using the RAG assessment, or RAGAS, framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into Langchain and show how we can go about this improvement process. We are very excited to have the RAGAS co-founders and maintainers, Jithin and Shahul, joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about RAGAS — we have the creators in the house today. And of course, we'll see Wiz, aka the LLM wizard and CTO at AI Makerspace, back for demos real soon. So let's get into it, everybody. Today we're talking RAG evaluation, this black art that everybody is really, really focused on as they start to build, prototype, and deploy these systems to production in 2024. As we align ourselves to this session, here's what we want to get out of it: what's up with this Langchain v0.1 that just came out? We want to understand how we can build a RAG system with the latest syntax and then also evaluate it. There are a lot of changes happening on the RAGAS side, just as on the Langchain side. Finally, we want to see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into Langchain, then we'll go into a high-level view of RAG and see exactly where the different Langchain components fit in. Finally, we're going to see what you all came here for today: the RAGAS metrics and how to implement the RAGAS framework. So we'll be building, we'll be evaluating, we'll be improving today, and the Q&A should be pretty dope. So, Langchain v0.1.0. What's Langchain all about again? Well, it's all about enabling us to build LLM applications that leverage context. They're so-called context-aware, so we can connect to other sources of data, we can do lots of interesting prompt engineering, we can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff, and look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. And we're doing that in the context of v0.1.0.
The blog that they put this out with said, the journey of a thousand miles always starts with a single step. And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes they'll be incrementing this and they'll continue to support v0.1 for a time every time this gets incremented of course as bug fixes and new features come out, they're also going to be incrementing now in this third v0.1.x slot. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. There was a lot of great stuff in the v0.1 release. And we're going to primarily focus on retrieval today, and also on this sort of langchain core that leverages L-C-E-L or the langchain expression language. So in terms of retrieval, there's going to be a lot that you can check out and add after today's event that you can then go assess to see if it actually helps your pipelines. So definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to kind of focus on this Langchain core. This is the Langchain expression language, and this is really a very easy kind of elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, into operating in production environments and monitoring and visibility tooling with LangSmith. So really it kind of all starts from here and allows you to really do some industry-leading best practice stuff with these tools. Now today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations from Langchain community. Chains, of course, are the fundamental abstraction in laying chain, and we will use those aspects to build our RAG system today. When we go and we assess, then we're going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, embedding representation, And then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, we're looking for similar things, similar to the vector question we asked. We can find those similar things. And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query. 
You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
So one of the things that people oftentimes are worried about language there's a ton of extra kind of uh unnecessary things in there well this is you know goes a long way to solving that problem um and it's awesome so let's see first which version we're working with uh so if you're watching this in the future you can be sure so we're on version 0.1.5 so we're already at dot five um line chain you know they're they're hard at work over there uh we're gonna need to add our open AI API key since we are going to be leveraging open AI uh basically this is a uh you know way that we can both use our lm for evaluation but also for generation and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much so the case that, you know, we have the same stuff that we always have. We need to create an index and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started as we always do with creating the index. Now we can and will still use LCEL. LCEL is important. You know, one of the things that we're going to show in this notebook, because you don't have to use LCL, they've implemented some abstractions in order to modify the, you know, the base chains that you're used to importing to LCL format, so you get all the advantages. But we're still going to look at LCL today, because it is an important piece of the line chain puzzle. because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package where we're going to grab our document loader to get our web-based loader. You know, importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it used to, as it always has. You know, our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check out that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating index. We have a document in this case. You know, it's just one document, but we have it and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core. So this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here and then we can split those documents this is less so focused on a specific uh Lang chain rag and more on the evaluation so we're just kind of choosing these values uh you know to to showcase what we're trying to showcase you see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text embedding AIDA 002. We don't need to use this embeddings model. And it looks like very soon we'll be able to use OpenAI's latest model once the tick token library updates there's a PR that's ready just waiting to be merged which is going to let us be able to do that but for now until that change is implemented we're going to stick with text data embedding 002 and this is like the classic embedding model, right? Nothing too fancy. 
Just what we need. When it comes to our FAISS vector store, what we need is to get that from langchain_community. But otherwise, this is exactly the same as it used to be; there's no difference in the actual implementation of the vector store, it's just coming from the community package. We'll pass in our split documents as well as our embedding model, and away we go. Next, we're going to create a retriever. This is the same as we've always done: .as_retriever() on our vector store. Now we can interact with it through that retrieval API. We can test it to see it working: "Why did they change to version 0.1.0?", and we get some relevant documents for that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG; we need to look at creating the AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull one from the hub; there are lots of community-created and LangChain-created prompts there. The idea is that you can pull one that fits your task, but the one we're showcasing is maybe not ideal, so we're going to go ahead and create our own. You don't have to use one from the hub. And so we're just going to create a simple one: answer the question based only on the following context; if you cannot answer the question with the context, please respond with "I don't know." That's a classic. We pass in our context, we pass in our question, and away we go. And you'll notice that this is exactly the same as it used to be. Let's go, LangChain. Now we'll set up our basic QA chain. I've left a lot of comments in the implementation of this LCEL chain to hopefully clarify exactly what's going on, but for now we'll just leave it at: we can create this chain using LCEL, and we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with RAGAS, so we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which will showcase how to do this a little more easily while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. And there you go. Now we can test it out and we can see: what are the major changes in v0.1.0? The major changes are... and it goes on; it gives a correct answer. That's great. Then we ask: what is LangGraph? And the response from the LLM is "I don't know," which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step, now that we have this base chain, would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was LangChain v0.1.0 RAG. Now let's talk RAG assessment. The RAGAS framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that and assess the different pieces that helped generate that answer within the RAG system.
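Before moving into evaluation, here is a minimal sketch of the chain Wiz just described, assuming the LangChain v0.1.x LCEL syntax (the model name and prompt wording follow the walkthrough; variable names and the overall wiring are assumptions):

from operator import itemgetter
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Index the split documents and expose them through the retrieval API
vectorstore = FAISS.from_documents(split_documents, embeddings)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer the question with the context, please respond with 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo")

# The chain returns the retrieved context alongside the response,
# because RAGAS will need both later.
rag_chain = (
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    | RunnablePassthrough.assign(context=itemgetter("context"))
    | {"response": prompt | llm, "context": itemgetter("context")}
)

result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"].content)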
And we can use that information to then decide on updates, on different things we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to do this in an automated way, we need four primary pieces of information. You're probably familiar with question and answer (input and output), and you may even be familiar with question, answer, context triples. What we need for eval is to also add a fourth component: the ground truth, the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth dataset. So, since we're not focused on absolutes here, we can actually create a ground truth dataset synthetically, and this is what we'll do today. We'll take the best model we can, pull GPT-4 off the shelf, and generate the set of information that will allow us to do evaluation. Okay, so we'll see how this works; it's pretty cool, and RAGAS has new tooling for this. But in terms of the actual evaluation, once we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question: how relevant is the context to the question? Context recall, on the other hand, asks the question: is the retriever able to retrieve all of the context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks: how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks: is the answer fact-checkable from the context, or is this a hallucination? So the four primary metrics in the RAGAS framework are these four: two for retrieval, two for generation. Let's dig in a little deeper on each one so that we really start grokking each metric individually, because they're slightly different but nuanced. Faithfulness is trying to measure factual consistency. Let's look at an example. The question: where and when was Einstein born? The context: Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says he was born in Germany and he was born on 14 March 1879, where a low faithfulness answer might get part of it right but hallucinate the rest. We want to avoid those hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want those to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, what we're doing here is penalizing the answer when it lacks completeness or, on the other side, when it contains redundant details. So, for instance: where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. "Oh, France is in Western Europe."
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Okay, so context precision, as we get into the retrieval metrics: here we're evaluating whether all of the ground-truth-relevant items are present in the context, and how well ranked they are. What we're looking for is that the most relevant chunks we return from our vector database appear in the top reference ranks. We want lots of good stuff ranked at the top. So we're really looking for everything that's relevant to the question to be returned in our context and to be ranked by relevancy. Makes sense; it's just the way we would want to do it if we were writing a book report or something. Finally, context recall is doing something similar to what we talked about before: we want to make sure we're retrieving everything that's relevant and addressing everything that's asked. So take the question again: where is France and what is its capital? The key here is that we're actually leveraging the ground truth as part of calculating this metric. The ground truth: France is in Western Europe and its capital is Paris. A high context recall means the retrieved context covers both of these, with each sentence of the ground truth attributable to the context; you can think of it as the number of ground truth sentences that can be attributed to the context over the total number of sentences in the ground truth. And a low context recall looks like what we saw earlier: well, France is in Western Europe, simple villages, Mediterranean beaches, the country is renowned for its sophisticated cuisine, on and on and on, but it says nothing about Paris, which of course the ground truth does. And if we look at each of these metrics, we start to get some idea of how our system is performing overall, though it's generally difficult to get a perfect picture. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision conveys a high-level quality idea: not too much redundant info, but not too much left out. Context recall measures our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy asks: am I to the point here? Am I very relevant to the question that was asked, or am I going off on a tangent? And finally, RAGAS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea, and that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to share with others, but beware: you know what's really going on, and directional improvements are really what we want to focus on. Basically, we want to look at how the answer relates to the ground truth. If we have a true ground truth dataset, this is probably a very useful metric. If we have one that's generated by AI, we might want to be a little more careful about relying on it too much.
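As an aside, the two ratio-style definitions just described can be written out roughly as follows (simplified; the RAGAS documentation gives the exact formulations):

$$\text{faithfulness} = \frac{\text{number of claims in the answer that can be inferred from the retrieved context}}{\text{total number of claims in the answer}}$$

$$\text{context recall} = \frac{\text{number of ground truth sentences attributable to the retrieved context}}{\text{total number of sentences in the ground truth}}$$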
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're looking at two different things: factual similarity, but also semantic similarity. So, again, you can use the Einstein example. If the ground truth was "Einstein was born in 1879 in Germany," the high answer correctness answer is exactly that, and of course a low answer correctness answer gets something literally wrong. So there is overlap between all of these metrics, and it's important to keep track of that. But overall, the steps for doing RAGAS are: generate the question, answer, context, ground truth data (and there's an awesome new way to do this called synthetic test data generation that has recently been released by RAGAS; we'll show you how to get it done today), run the eval, and then go try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from LangChain today. It's called the multi-query retriever. This is going to generate many queries from our single query, answer all of those, and then return the relevant context from each of those questions into the prompt, so we're actually getting more information. But you can pick any retrievers off the shelf, and then you can go back and look: did my metrics go up? Did they go down? What's happening as I add more data or more advanced retrieval methods to my system? And in this way, we can see how to combine RAGAS with RAG improvement, as Wiz will show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGAS, this is the thing we're here to talk about, right? It's an amazing library that does a lot of cool, powerful things, but the thing that is most important is that it allows us to have some insight into the changes we make, in terms of the directional impact they have. So while we might not be able to say these answers are definitely true, as Greg was expressing, we can say it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do an evaluation on all of the metrics, we need two important things. One, we need questions: questions that are relevant to our data, if we're trying to assess our retrieval pipeline as well as our generations. And we also need some ground truths. As Greg mentioned, we are going to use synthetically created ground truths. It might be more performant to use, say, human-labeled ground truths, but for now we can let the LLM handle this. I'll just zoom in a little bit here. The idea is that we're going to leverage RAGAS's new synthetic test data generation, which is very easy to use, much better than the process we had before, which was doing this manually. We're going to use this to create our test dataset. Now, it's important to keep in mind that this uses GPT-3.5 Turbo 16k as the base model, and it also includes GPT-4 as the critic, so we want to make sure we're not creating too much data, or if we are, that we're staying very cognizant of the costs.
So the first thing we're going to do is create a separate dataset, a separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're asking the same LLM the same questions with the same context, which might unfairly benefit the more simple method. So we're going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same as before (29 versus 24). And then we're going to use the test set generator. It really is as easy as: test set generator with OpenAI, that's what we're using for our LLM, and then generate with LangChain docs. You'll notice this is specifically integrated with LangChain; there's also a version for LlamaIndex. All we need to do is pass in our documents, the size we'd like our test set to be, and then the distributions. Now, these distributions are quite interesting. Basically, this is going to create questions at these ratios from these subcategories. The idea is that this is going to test our system on a variety of different kinds of questions. So we have simple, which is, as you might think, very simple. We have reasoning, which is going to require some more complex reasoning that might tax our LLM a little harder. And then we have multi-context, which is going to require multiple contexts, so our LLM is going to have to pick up a bunch of them in order to be very good at that particular kind of task. And the reason this is important is that not only do we get an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGAS team for putting this in; we love it and it makes the job a lot easier. So that's great. We can look at an example of the test data: we have our question, we have some contexts, and then we have our ground truth response, as well as our evolution type, which in this case is simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between LangChain and RAGAS; it's currently being worked on to be brought up to speed, but for now we're just going to do this manually. So what we're going to do is take our test set and look at it: we've got our questions, contexts, ground truths, as well as our evolution type (the distribution we talked about earlier). Then we're going to grab a list of questions and ground truths, ask those questions to our RAG pipeline, and collect the answers and the contexts. Then we're going to create a Hugging Face dataset from those collected responses along with the test questions and the test ground truths. We can see that each of the rows in our dataset has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this dataset, we're good to go and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from RAGAS, which is very good. These are the ones we care about today: faithfulness, answer relevancy, context precision, context recall, and answer correctness. (A sketch of the test set generation step we just walked through follows below.)
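For reference, a rough sketch of that synthetic test set generation step, assuming the ragas API as it stood in early 2024 (the test size is a placeholder, and the exact import paths may have shifted in later versions):

from langchain.text_splitter import RecursiveCharacterTextSplitter
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

# A separate document pile for eval, so the test questions are not generated
# from exactly the same chunks the RAG pipeline retrieves from
eval_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
eval_documents = eval_splitter.split_documents(docs)

# GPT-3.5 Turbo 16k as the generator and GPT-4 as the critic by default: watch costs
generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    eval_documents,
    test_size=20,  # placeholder
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
testset.to_pandas()  # question, contexts, ground_truth, evolution_type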
And you can see it's as simple as loading and importing them, and then putting them into a list, so that when we call evaluate, we pass in our response dataset, which is the dataset we created above with a row for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that RAGAS made recently is that they've made their evaluation async. This is a much faster process than it used to be; as you can see, this was around 42 seconds, which is much better than the times we used to see. Thanks, RAGAS team, for making this change. We can get our results here: we have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. We want these numbers to be high, but we're more interested in seeing whether changes we make to our system make those numbers higher. So let's look at another awesome part of RAGAS before we move on to making a change and seeing how it goes, which is that we can look at these scores at a per-question level in a Pandas DataFrame. You can see that we have all of our scores in this DataFrame. This is huge, especially because we can map these questions back to those evolution types and see how our model performs on different subsets, the elements of that distribution. So now we're going to make a simple change. We're going to use the multi-query retriever. This is stock from the LangChain documentation. We're going to use this as an advanced retriever, so this should retrieve more relevant context for us; that's the hope, anyway. We'll have our retriever and our primary QA LLM; we're using the same retriever base and the same LLM base that we were using before, we're just wrapping them in this multi-query retriever. Now, before, we used LCEL to create our chain, but now we'll showcase the abstraction, which implements a very similar chain in LCEL without us having to write all that LCEL out. So we're going to first create our stuff documents chain, which takes our prompt; we're using the same prompt that we used before, so we're not changing the prompt at all. And then we're going to create a retrieval chain, which does exactly what we did before in LCEL, but without writing all that LCEL. So if you're looking for an easier, abstracted method, here you go. You'll notice we call it in basically the same way, and then we're also looking at this answer; the answer is basically the response.content from before. And we can see this is a good answer, makes sense to me, but we also have a better answer for that "What is LangGraph?" question.
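A minimal sketch of both steps just described, the RAGAS evaluation call and the multi-query retriever wrapped in the newer chain helpers (assuming ragas 0.1.x and LangChain 0.1.x APIs; column names, variable names, and the prompt wiring are assumptions):

from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
    answer_correctness,
)

# response_dataset is the Hugging Face dataset built above, with columns
# question, answer, contexts, ground_truth
results = evaluate(
    response_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness],
)
results_df = results.to_pandas()  # per-question scores

# The "simple change": wrap the same base retriever in a multi-query retriever
# and build the chain with the abstracted helpers instead of hand-written LCEL.
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate

advanced_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

# create_retrieval_chain expects {input} and {context} keys in the prompt
retrieval_qa_prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer the question with the context, please respond with 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{input}"
)
document_chain = create_stuff_documents_chain(llm, retrieval_qa_prompt)
retrieval_chain = create_retrieval_chain(advanced_retriever, document_chain)

response = retrieval_chain.invoke({"input": "What is LangGraph?"})
print(response["answer"])  # retrieved documents come back under response["context"]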
We're going to do the same process we did before, cycling through each of the questions in our test set, getting responses and contexts for them, and then evaluating across the same metrics. You'll notice that our metrics have definitely changed, so let's look a little more closely at how they've changed. It looks like we've gotten better on our faithfulness metric, we've gotten significantly better at answer relevancy, which is nice, and we've gotten a little bit better at context recall. We've taken some small hits: a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this change improved what we hoped it would improve, and now we are left to tinker, to figure out how to improve answer correctness, or why it was impacted by this change. But at least we know in what ways and how, and we're able to reason more intelligently about how to improve our RAG systems, thanks to RAGAS. Each of these metrics corresponds to specific parts of our RAG application, so it's a great tool for figuring out how to improve these systems by providing those directional signals. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our RAG systems not just by feeling that "I think that's better, the LangGraph question got answered better," but by actually going and showing our bosses, our investors, anybody out there listening: hey, look, we have a more faithful system, check it out, we went from the base model to the multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can take things off the shelf that we were only qualitatively analyzing before and directionally improve our systems by instrumenting them with RAGAS and measuring before and after small iterations to our application. So today we saw LangChain v0.1.0 used to build RAG, and then we actually did RAG on the LangChain v0.1.0 blog. Expect stable releases from here; it's more production ready than ever. And you can measure not just faithfulness but different generation metrics, different retrieval metrics, even different end-to-end metrics. Big shout out to everybody that supported our event today: shout out to LangChain, shout out to RAGAS, and shout out to everybody joining us live on YouTube. With that, it's time for Q&A, and I'd like to welcome Wiz back to the stage, as well as Jithin and Shahul from RAGAS, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jithin and Shahul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich? Yeah. You want to go for it? I don't think I actually understood what you mean by memorization. Yeah. Oh, yeah. Okay. You want to take a crack at this, Shahul? Yeah, I mean, what is the difference between memorization and hallucination in RAG systems? That's it.
The line between memorization and hallucination: I don't know exactly where to draw that particular line. It seems like what is meant is the use of internal knowledge versus retrieved knowledge. There are situations in RAG where knowledge is a continually evolving thing, right? So maybe the LLM thinks that a person is still alive, but the person died yesterday or something. Now, if that particular fact is read from Wikipedia or somewhere, there will be contrasting knowledge between the LLM and what the ground truth Wikipedia says. That can be hard to overcome, because the LLM still believes something else. So it's a hard problem to crack, and I hope there will be many future works on it. But how can we prevent such hallucination? What we require, when using LLMs to build RAG, is to align the LLMs so that they answer only from the given grounded text data and not from their internal knowledge, or at least give a high preference to the grounded text data compared to what is in the LLM's internal knowledge. So that can be one approach. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided, basically. When it comes to memorization versus hallucination, the most important thing is that you could maybe frame memorization as a slightly less negative form of hallucination, because it's likely to be closer to whatever the training data was. But in terms of a RAG application, both are bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart, context-aware chunking. Can we expect significant improvements of RAG results using smart chunking? What do you think, Jithin? Is this something that we can expect improvements in? Yeah, so one thing that we see when we're building RAG systems is that how you're formatting the data is where most of the problems are. If you take some time to clean up the data and format it in a way that actually makes it easier for your RAG system, the performance difference is really great, because with the models right now, if you're using a very capable model and you provide it with the correct context, the model will be able to use the information in that context. So all these tips and tricks to optimize that help; even what Chris was doing with the multi-query method, right? It's another trick to make sure that you get different context, from different perspectives, into the final answer. So all these different types of tricks can be used, and this is actually why we started this: we wanted to evaluate all the different tricks that are out there and see which works best, because it can be different for your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems; just because they're LLMs doesn't mean the problem is solved for you? Yeah, it actually matters a lot, because what goes in comes out. So it's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shahul.
Can you compare TruLens and RAGAS? This is the first I've heard of TruLens. Maybe other people have, and maybe you can tell us a little bit about what they're doing, what you're doing, and the overlap you see. Sure. Yeah, TruLens has been around for a while for evaluating ML applications, and they cover a lot of applications. RAGAS currently is mostly focused on RAG, as in we wanted to crack the application that most people care about, which is RAG. So we are mostly doing things that help people evaluate and improve their RAG systems. We are not building any UI. On the integrations side, we are largely interested in providing integrations to players like LangSmith, so that people can trace and see their runs there, rather than building a UI on top of RAGAS. So RAGAS mainly offers metrics and features, like the synthetic test data generation you have seen, to help you evaluate your RAG systems. I don't think TruLens has a synthetic data generation feature, which is something that many of our developers really like, because it saves a ton of their time; nobody really wants to go and label hundreds of documents, it's a boring job, right? So we are trying to double down on the points that we have seen developers really like, and we are trying to stay true to the open source community as well. Nice. Okay, very cool, very cool. Rad asks, and I'll send this one over to you, Wiz: can you combine the multi-query retriever with a conversational retrieval chain? Sure. Yeah. Basically, LangChain works in a way where you can combine any retriever inside of any chain, right? A retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process, or combine many different retrievers in an ensemble, you can do that with basically any chain. That conversational retrieval chain is looking for a retriever, and as long as it can be accessed through the retrieval API, it's going to work fine. I would add, though, that for the conversational retrieval chain you'll want to use the 0.1.0 version, which has been implemented with LCEL. But other than that, you're good to go. Okay, okay. And back to this idea of smart chunking and smart hierarchy of data: we often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? So Sujit asks, and maybe I'll send this one over to you, Jithin: I know the chunk size matters. Are there guidelines for chunking that you're aware of or that you recommend when people are building RAG systems? Yeah, so I don't have a very precise guideline; maybe Shahul can back it up. But one thing I've seen personally from experience is, A, do the evaluations, but then B, make sure that you combine multiple levels: you basically create a hierarchy where you have the different chunks, and then you summarize the different concepts, summarize the different levels, so that all the core ideas are there in the hierarchy. That has actually been very helpful, as I'll explain. So, yeah.
So on exact chunking size, I haven't seen it show up in the metrics as such, but all the recursive summarization has helped, and I think LlamaIndex has a few retrievers built around that. Shahul, what do you think? Yeah, just adding some more points to it. I think there is no one chunk size that fits all types of documents and all types of text data, so it's a relative thing. There are two ways to handle this problem. The general rule of thumb is to ensure that there is enough context: each chunk should make sense even as an individual chunk; if a person reads it on its own, it should make sense. How do you achieve this? You can achieve it by writing a set of heuristics, let's say something like determining the document type and changing the chunking based on that. And I think, moving on from heuristics, we might even see smaller models, very small models, that are capable of determining the chunk boundaries smartly, so that you don't really have to rely on the heuristics; it's a more generalizable way of doing it. So I think that's where we're going in the future of chunking, and hopefully the problem gets solved like that. Yeah, yeah. I really like this idea of making sure each individual chunk makes sense before moving up a level and thinking about, okay, what exactly am I doing: hierarchical, parent document, multi-query, whatever it is; each chunk should make sense. And that's going to be dependent on data. Yeah, I really liked that. Okay, so related to that, I want to go to this embedding model question in the Slido from Ron. It's similar to the chunking idea; people always want the answer, you know? Which embedding models should I be using when I develop a system? Any emergent models or techniques that I can see significant improvements with? Maybe Shahul, if you want to continue here. Sure. Again, there is no one size that fits all for this answer. The thing is, it depends on a lot of factors. The first question will be open source or closed source. You have a lot of open source players even rivaling OpenAI with their open source models; I think the Alibaba group recently released their M3 embedding, which is awesome, one of the most powerful open source embeddings we have ever seen, even rivaling OpenAI's embeddings, right? So it's a set of questions that you have to answer. If you want an easy way of building a baseline RAG system, of course OpenAI's embeddings are a good place to start; you don't have to worry about anything else. Then you can iteratively improve it, and that's where RAGAS also comes in: now you have an abundance of embeddings to choose from, and you want a way to compare them, so you can use RAGAS to compare all these different embeddings, choose the one that fits you, and you're done. There it is. There it is. Just closing up this topic on chunks and embedding models.
Wiz, I wonder, why did you choose Ada? Why did you choose, what is it, 750 overlap? Any particular reason? Zero thought put into those decisions. We used Ada because it's the best OpenAI model that's currently implemented, and we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or more mindful approach, and so to do that, we just kind of selected them. I think the thing I really want to echo from what we've heard so far is: when we're thinking about our index, or thinking about our vector store, we really want to be able to represent individual quanta of information. The closer we can get to that, the better it will be, and then we can add that hierarchy on top. And I think what was said about using models to determine that at some point is definitely a future we can imagine we'll be living in soon. Yeah, and I think again we go back to this data-centric idea. It's easy to get the RAG system set up and instrumented with RAGAS, but you're going to get the improvements, you're going to get the thing really doing what you need it to do for your users, by doing the hard, kind of boring data work, the data engineering and data science on the front end, that you really just can't outsource to AI and have to deal with yourself. Okay, one more sort of "what's the answer" question. I want to maybe send this one to Jithin. If somebody picks up RAGAS and builds a RAG system and asks, okay, which RAGAS metric should I use, which one should I look at, what would you say? Is there a starting point? Is there a sequence you'd look at? Or is the jury still out on this? So first of all, just try it out with all of the metrics. Basically, figuring out which components work and what the state of each of these components is gives you an idea of, okay, where can I make an improvement as fast as possible? If your generator is bad, maybe try out a few other LLMs; if your retriever is bad, then figure out what is actually happening in the retriever part: is it context relevancy, is it the recall that's bad? That is the way. So starting off, try out all the metrics that you have, and then focus on the ones that are the worst.
And after you understand what the metrics are telling you, you'll get an idea of what other things you can try to improve. Try the easiest parts first; cross out the low-hanging fruit, and that is how you progressively improve it over time. But like I said, it's not the absolute values that matter, it's the trends that matter, right? You guys did a good job explaining that. So go for the easiest things that you can patch up fast and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's so simple when we can break it down like this. And, you know, just a shout out to Manny out there: that was kind of an attempt to answer one of your many questions today. We'll see if we can get to some more on LinkedIn. But I think this idea of getting your system instrumented, so you can start to look at and chunk up different pieces of it and try to improve them, there's a lot of content that needs to be made on this. These guys are open source first, open source forward; we'd love to see some folks in the community start to put together guides for how to break down and use RAGAS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGAS in 2024? Maybe either of you wants to take this; let us know what to expect from you heading forward this year. Yeah, Shahul, do you want to take this? Yeah, that's a tricky question. We want to go where the community takes us. So yeah, doubling down on things like synthetic data generation; there is a lot of interest there, and there is a lot of interest in expanding RAGAS to other LLM tasks as well. So there are all these interesting directions to take, and hopefully we'll get more signals from the community on which path to take. I mean, we do have a lot of directions and a lot of feature requests coming in, so we have to make that decision and move on. But as of now, the synthetic test generation is something that gets a lot of interest; we want to make it very stable and very useful, and make sure that we push the limits of the closed-source models and the frameworks around them to build great test data that's very easy to use. Yeah. Anything to add, Jithin? Yeah, honestly, right now we have a good base, and we're very curious about what we can do with evaluation-driven development, what the extremes of that are. So I'm curious to see what the community comes up with, what you guys come up with, what we come up with. So yeah, really excited for that. Yeah, yeah, let's see what everybody builds, ships, and shares out there, and contributes. Well, thanks so much, Jithin, thanks Shahul, thanks Wiz. We'll go ahead and close it out for today. And thanks everybody for joining us. Next week, you can continue learning with us; we're talking alignment with reinforcement learning from AI feedback. If you haven't yet, please like and subscribe on YouTube.
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI Engineering Bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, for any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well: we will get back to your question, and we will share all of today's questions with the RAGAS guys to see if we can get follow-ups for everybody that joined us and asked great questions. So until next time, and as always, keep building, shipping, and sharing, and we and the RAGAS guys will definitely keep doing the same. Thanks everybody. See you next time.
RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1
3,842
AI Makerspace
20240207
GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively! Special thanks to LangChain and RAGAS for partnering with us on this event! Event page: https://lu.ma/theartofrag Have a question for a speaker? Drop them here: https://app.sli.do/event/2rLa8RML994YsMQt1KLrJi Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/ryzhbvxZtbvQ4BCv5
2024-06-13T08:18:27.794165
https://www.youtube.com/live/XOb-djcw6hs
Hey Chris, is it true that we can improve on our PEFT-LoRA approach with this quantization thing? It sure is, Greg. Yes. And is quantization really as good and as dope as everybody's talking about? Yes. Emphatically, yes. Emphatically, yes. Man, I cannot wait to see exactly what's going on inside. You're going to show us how to do this today, right? Sure. All right. Let's go ahead and get right into it, man. We'll see you back in just a little bit. Today, we're going to talk quantization. I'm Greg. That's Chris. We're from AI Makerspace. This is a bit of an add-on to last week's event, which covered parameter-efficient fine-tuning and low-rank adaptation. Today, we're going to take it to the next level and talk quantization. We'll demystify the idea of quantization, and we will also talk about how to leverage the latest in low-rank adaptation, which is a quantized version of it called QLoRA. As always, we'll be collecting questions with Slido, so go ahead and provide your questions for us throughout the day at that link, and we'll answer as many as we can when we're through with the demo at the end. Of course, we'll have Chris back to lead and wizard his way through the demo on quantization soon, but for now, let's cover what we need to know so that it all makes sense to us. We're going to talk quantization of LLMs today, and we're going to talk fine-tuning with LoRA. This is the main goal: we want to understand, and align our aim to, really grokking QLoRA, and then see how we can implement it. We got a little bit of insight into quantization last time when we were loading the model, but now we want to look at how it can be used to fine-tune, some of the background and intuition for why this works, and what the industry has learned about the precision of numbers within our LLMs. So we're going to talk fine-tuning, quantization, QLoRA, and then we'll do it. And to contextualize this, similar to last time, we want to understand that fine-tuning often comes into play after we do prompt engineering, often after we set up a retrieval augmented generation system. And we now want to look at how we can optimize our large language model; in other words, how we want the model to act, how we want the input and output schema of the model to be a little bit more constrained, a little bit more dialed in, a little bit less large, a little bit more small. And this is the trend we're noticing as 2024 is upon us: we're seeing a bigger and bigger interest in smaller, more performant language models, and fine-tuning is really a key aspect that's going to help us get there. So let's just remind ourselves what we talk about when we talk about fine-tuning with PEFT-LoRA, and why we need to do this. You know, when we talk LLMs, they're super big. They have billions and tens of billions of parameters. It's likely we'll see models with hundreds of billions of parameters before too long. Not all models are always getting bigger, but some of them are, and the reason is that if we keep adding more text and more parameters, we're pretty confident that our next-word prediction will continue to improve. But as we build larger and larger models, we have to deal with more and more compute to be able to handle them, whether that's loading them, training them, fine-tuning them, or performing inference on them and serving them.
We're kind of abstracting away from the regular developer, the regular individual out there that doesn't have access to a giant cluster of GPUs, the ability to even play with these things. And this is the core problem: when we want to do full fine-tuning on many, many billions of parameters, it becomes a huge pain for anybody trying to use consumer hardware, or any small business trying to just use the laptops they have and maybe a few resources on the cloud. And this is as true for fine-tuning as it is for loading and storing, and certainly for deploying these models. It just costs too much. And the solution for dealing with the fine-tuning, the storing, and the deploying is kind of the same, but today we're focusing on fine-tuning. Today we're focusing on fine-tuning using fewer parameters. It's all about using fewer parameters; we don't need all of them, as we started to build some intuition for last time. And in fact, for the ones that we keep, what we're going to do today is make those parameters smaller in a sense, smaller in a computational sense. This is the essence of quantization. So while it may not necessarily be fewer parameters when we talk about quantization, although it often is when we talk about fine-tuning, we're trying to move these big, big models toward smaller packages, through fewer parameters and through more efficient representation of those parameters. And we saw last time that LoRA is the number one PEFT method you should know. It's called low-rank adaptation. And the big idea of LoRA, as we discussed, is to fine-tune using factorized matrices. Again, we didn't fine-tune absolutely everything; we used fewer parameters, which was great because it was more efficient. And we found out that we can actually leverage LoRA adapters for many tasks. So you could have one big, big model and a ton of different LoRA adapters, and deploy each of those adapters to production, because inference is when the adapter actually comes into play. So it's a very flexible, very good technique, especially for larger companies in industry that want to have many adapters and one very powerful model; we'll probably start to see this emerge as an approach to AI development in the enterprise. And, you know, it's really comparable to full fine-tuning. So, in essence, we saw that fine-tuning is all about modifying the behavior of LLMs by updating parameters. Parameter-efficient fine-tuning is all about fine-tuning with fewer parameters. Low-rank adaptation is all about fine-tuning using factorized matrices. And so parameter-efficient fine-tuning through low-rank adaptation is all about modifying behavior by updating fewer parameters using factorized matrices. This all flows together, and it leads us directly to our new friend, quantization. And this meme is so good I had to put it twice, because it's such an oft-misunderstood idea. It has certainly taken a long time for me personally to really grok this thing, so let's see if we can break it down in a way that makes sense to all of you. First off, the weights of our LLM: when we talk about weights, it's the same thing as when we talk about parameters. So parameters, I might say weights; we're still talking about parameters.
Those parameters are simply numbers. They're just numbers. And specifically, they're floating point numbers, also known as floats. And it's important to understand a little bit of the detail here, because this is the essence of what we're doing in quantization. When we talk about floats, you may harken back to your days in school, maybe chemistry, where you learned about significant figures, sig figs, everybody's favorite, right? And then, if you're like me, you become an engineer and you don't care anymore, ever again. I was a mechanical engineer. If you're a computer scientist or computer engineer, maybe you continue to go deeper, and these days in AI, if you're a developer, you need to go a little deeper too. Because this idea of a float, a number stored with a fixed precision, means we can talk about representing, for instance, 12.345 as 12345 times 10 to the minus 3, and we can do this using a specific number of bits in our computer. When we talk about this fixed precision, there are a number of different types of precision. What we're generally going to be using for default computations is what's called full precision. Full precision means that I have 32 bits to represent my floating point number, broken up into a couple of different pieces, but the big idea is that there are 32 bits. And the question is: is that the right amount when we want to deal with 70-billion-parameter models and things like that? And it turns out that in machine learning, we found over time, through experiments, that if we didn't use 32-bit precision and instead used 16-bit precision, essentially half precision, to simply represent those decimal numbers inside the neural network, the numbers that represent each of the neural network weights (each of the neural network perceptrons is one way you could think about it), then we can get almost identical inference outcomes from our LLM. Because remember, we just want the words that come out at the end; we just want the ideas, the outputs. We don't necessarily care about the precision of the stuff inside the black box. We put in, we get out. And a lot of researchers were seeing this with large language models: if we just leveraged half precision, we could get very, very good outcomes, and this effectively halves the entire model size. So what are we saying? We're saying that we can get essentially the same thing coming out, even if we represent each of the model weights using half as much information. Because really, how many sig figs do we need? And another way to talk about moving from a 32-bit representation down to a 16-bit representation is to say we are quantizing: we quantize the 32-bit weights down to 16-bit. Hence, quantization. Now, when it comes to quantization, there are many different approaches to quantize model weights. This is very important: we're not going to cover every single approach, because that's not really necessary for what we want to discuss today.
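As a rough back-of-the-envelope illustration of why the bit width matters (weights only, ignoring activations, optimizer state, and the KV cache):

7B parameters × 32 bits (4 bytes each)  ≈ 28 GB of weights
7B parameters × 16 bits (2 bytes each)  ≈ 14 GB
7B parameters ×  8 bits (1 byte each)   ≈  7 GB
7B parameters ×  4 bits (0.5 byte each) ≈  3.5 GB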
But there are many different ways to quantize model weights, and we hope to keep bringing you more content on approaches that differ in their actual implementation and nuances. For today, though, we're going to use this QLoRA idea as a focusing lens. Now, the QLoRA story begins with a paper called "8-Bit Optimizers via Blockwise Quantization," which came out of the University of Washington. Tim Dettmers was the lead author, and he's been quite a superstar in the field; he's kind of the quantization guy. In this paper, they showed that you can use 8-bit representations and maintain the performance we see at full precision, 32-bit. So here we see, in this early paper, one of these pieces of work saying: hey, look, experimentally, we're seeing that if we reduce the precision, we can still get great results. And this is not reducing it to half precision; it's reducing it to quarter precision, 32 down to 8. And this bits-and-bytes approach, this bits-and-bytes paper, turned into what became the bitsandbytes library, which has since evolved, which you'll see Chris use today, and which gets used all the time now. Now, bits and bytes: recall that one byte is equal to eight bits. We're going to continue the discussion in bits today, but you'll see many papers and discussions that talk in bytes as well. So it's pretty simple to understand why the library was named bitsandbytes. Now, again, this is one approach, and so there are some trade-offs, as with any approach. For instance, when we use the bitsandbytes approach to quantization, we're not really getting any additional benefit to our inference latency; we're not speeding up inference a whole lot with this particular approach. However, what we are doing is leveraging a tool that gives us very flexible use of those LoRA adapters. So for enterprise, if we're thinking about how to have one big model and just a bunch of adapters, this is going to be our friend, and this is why we choose to focus on it today. And this bitsandbytes library forms the basis for what comes next: it forms the basis for this QLoRA idea, this efficient fine-tuning using quantization. The big takeaway of fine-tuning with quantization from the QLoRA paper is that it's super great, even though it's eight times less precise. What we actually have going on in QLoRA is not an 8-bit representation but a 4-bit representation, which is completely insane, and we can fit all of that on a single 48-gigabyte GPU, which is just kind of incredible; it's kind of mind-blowing that we can do this. And so this QLoRA paper is essentially saying: hey, listen, we've got this idea that we can do fine-tuning using a 4-bit approach versus even a half-precision approach, and we get amazing results. And so this is the essence of what's going on with QLoRA. So what we can think about is this: go back to the idea of PEFT-LoRA fine-tuning, where we're modifying behavior by updating fewer parameters using factorized matrices.
So what we can think about is: if we go back to this idea of PEFT-LoRA fine-tuning, where we're modifying behavior by updating fewer parameters using factorized matrices, and we add this idea of quantization, where quantization is simply representing high-precision numbers with low precision, then we get to this place where we talk about PEFT-QLoRA fine-tuning, where we're modifying behavior by updating fewer quantized parameters using factorized matrices. And so the process as outlined in the QLoRA paper, and the process that you're going to see today, is something like this. We download the model weights. Anytime you download model weights from Hugging Face, they're always going to be in full precision, 32-bit. Then we load our parameter-efficient fine-tuning model into GPU memory. Anytime we load into GPU memory for inference or training, we're going to be loading using that parameter-efficient fine-tuning method. Then we'll initialize our low-rank adaptation, our LoRA configuration. And finally, and this is the key to the whole thing: during training, we have the full precision 32-bit model, and we're going to actually quantize it, 32-bit down to 4-bit, for training. Now, during training, we're going to flow through the network, and each time we have to do a computation, each time we have to calculate something during our training process, we're going to de-quantize that 4-bit representation back up to a 16-bit, half-precision representation. We're going to do the calculation, and then we're going to re-quantize back down. At each step of our training or fine-tuning, we quantize, de-quantize, move on. So we're never holding that half-precision model fully in our GPU memory; rather, we're simply using half precision to do the calculations. This is the magic of what's really going on behind the scenes, and it turns out this works incredibly well. And again, the intuition behind the 16-bit piece is that we saw that for inference, you can go from 32-bit down to 16-bit and get very, very good results. We saw this experimentally over a lot of time, not just in papers from the University of Washington, but also in papers from many other researchers. And this QLoRA approach, fundamentally, is to load those full precision weights into GPU memory as quantized 4-bit weights, and then only de-quantize up to 16-bit during calculation, back down as it moves through. All right, so this is the core approach that we're going to see today. You're going to see things like the bits and bytes configuration sketched just below: you'll notice that we want to load in 4-bit, and you're also going to see a data type called NF4. Chris is going to talk a little bit more about it; it's very important, it's essential to the QLoRA approach. And that's it for the big ideas we need to really see how this build can be taken to the next level. So what we want to do is take the same build that we've already looked at, the old UNO reverse card build: given the response, predict the instruction. We want to use the same model that we saw last week, because it's still one of the best out there, Mistral 7B Instruct v0.2. And we're going to use the same data for fine-tuning, just to keep everything simple. That Alpaca GPT-4 dataset is there. So again: given the output, the response, predict the input, the instruction. And with that, we're ready to kick it back over to Chris, the Wizard, to show us how to do fine-tuning with PEFT QLoRA and fill in some additional details. Wiz, over to you, man.
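Here is the kind of bits-and-bytes configuration Greg is pointing at, as a hedged sketch. The argument names are from the transformers BitsAndBytesConfig API, and the values mirror the ones used later in the demo:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights on the GPU in 4-bit
    bnb_4bit_quant_type="nf4",              # the NF4 data type from the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # de-quantize to bf16 for each computation
)
```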
Oh yeah, thanks, Greg. Really appreciate it. And guys, I'm excited, because quantization is definitely one of my favorite topics. It's one of the best things we can do right now. And as you can see, we only used around 20 gigabytes of GPU RAM to train this 7 billion parameter model, which is quite impressive, in my lens. That includes fine-tuning. In any case, we'll get right into it. First of all, we're going to be using Mistral 7B Instruct v0.2. This is just Mistral's most recent instruct-tuned model. I love it. And we're going to now move on from PEFT, which we discussed last week, into the Q in QLoRA. So we discussed the idea of how we can reduce the number of parameters that we train. But now, how do we reduce the size of the parameters that we train? First of all, what is quantization? Greg already talked us through it. I'm going to give a brief overview here of what's happening under the hood, and then we'll get into how to implement it in code. Spoiler alert: it's super easy. Thanks, bits and bytes. But let's look at what quantization is from this perspective. Quantization is a process of discretizing an input from a representation that holds more information to a representation with less information. That's crazy, right? The idea is we want to express more information with less information. So how do we actually do that? Well, in the Tim Dettmers QLoRA paper, they rely on this process called block-wise k-bit quantization, which sounds very scary, but it's not so bad. It relies on two very important things. One, it relies on the fact that in neural networks, the model weights are mostly normally distributed. If you're coming from a stats background, as soon as you hear the words normal distribution, your eyes should light up; we're going to be able to make use of a lot of very clever tricks to help us do whatever we're trying to do. And then it also relies on this idea of the NF4 format, which is a number format, or data type, created by Tim Dettmers and team, which is information-theoretically optimal. Now, not literally; it was proven that this is not literally true. But empirically, for all intents and purposes, NF4 is very, very efficient, which is excellent. So how does this work behind the scenes? Okay, we get it: model weights are normally distributed. That's great. So what we're going to do is essentially put a pin in the number line that is near to the mean of our desired numbers, which are going to be in a distribution, and that distribution is going to be normal, right? Then we're going to use that mean as a zero point, and we're going to use this NF4 data type, which is a zero-centered number format, to represent the numbers that appear around that specific point on the number line. So there's a step that needs to take place here: we're going to normalize all of our numbers to be within a specific range, minus one to one. Then we're able to have this idea of a saved place on our number line, with a range around it that we understand. And that's really about it. Now, that's a bit simplified, and you can look at the paper for the math. It's great.
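As a toy illustration of that normalize-then-bucket step, here is a small sketch using plain, evenly spaced (absmax-style) 4-bit buckets. To be clear, this is an assumption-heavy stand-in: the real NF4 format uses quantile-based bins tuned for normally distributed weights, not the uniform bins shown here.

```python
import torch

def toy_quantize_4bit(w: torch.Tensor):
    """Toy absmax 4-bit quantization: NOT the real NF4 scheme."""
    scale = w.abs().max()               # the 'pin': one constant per block in the real scheme
    w_norm = w / scale                  # normalize into [-1, 1]
    levels = torch.linspace(-1, 1, 16)  # 16 evenly spaced bins (NF4 uses 16 quantile bins instead)
    idx = (w_norm.unsqueeze(-1) - levels).abs().argmin(dim=-1)  # nearest bin index, 0..15
    return idx.to(torch.uint8), scale

def toy_dequantize_4bit(idx: torch.Tensor, scale: torch.Tensor):
    levels = torch.linspace(-1, 1, 16)
    return levels[idx.long()] * scale

w = torch.randn(64)                     # one block of 64 weights, roughly normal
idx, scale = toy_quantize_4bit(w)
w_hat = toy_dequantize_4bit(idx, scale)
print((w - w_hat).abs().mean())         # small reconstruction error
```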
But the idea is that we kind of drop a pin on the number line, and we have this NF4 number format, which represents a range around that point on the number line. That is what builds up the buckets, or bins, that we're going to use to represent our numbers. And the reason this works so well is, again, because model weights are normally distributed, and because this is an information-theoretically optimal data type for that minus one to one range. So to be specific, NF4 is for that minus one to one range for a normal distribution, which means the only reason this works is because of that first fact, right? Now, beyond just that, QLoRA does an extra step. You might have thought to yourself, when I said drop a pin on the number line: well, okay, if we drop a pin on the number line, that's all well and good, but doesn't that mean we have kind of a high-precision number? It doesn't have to be as high-precision, perhaps, but it's definitely still high precision. And that's true: that pin we drop is high precision. Well, it can be used to represent many numbers, in this case 64 numbers, from the QLoRA paper. So each pin is associated with 64 numbers. Tim Dettmers and crew said that's not enough; that's going to give us 0.5 bits per parameter of overhead, so we need to go bigger. So what they did is they actually took all of those quantization constants, and that's the technical term for that pin that we're dropping, and they also quantized those. So we represent our quantization constants in an 8-bit format, and we do 256 of those for every 32-bit precision number. So we have one 32-bit precision quantization constant that sits on top of 256 8-bit quantization constants, and each of those sits on top of 64 4-bit weights. So you can see the savings in terms of memory here is insane, right? We're able to represent so much of our data in that 4-bit representation, and we're also able to do it in a way that retains a ton of information. And that is key. I saw some questions in the YouTube chat concerning, you know, what are the trade-offs here? What are the performance gains? There definitely are some when it comes to latency; we'll discuss those as we move through the rest of the notebook. But in terms of the actual effectiveness of the model, the performance hit can be very small. It is not zero, there is a performance hit, but it's incredibly small, which makes this a very effective technique, especially when applied in the way we're going to see it applied today. So that's basically what we're talking about when we talk about this idea of QLoRA, right?
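The overhead numbers quoted here can be sanity-checked with a little arithmetic, using the block sizes just mentioned (64 weights per 4-bit block, 256 first-level constants per second-level constant). This is a back-of-the-envelope sketch, not an exact accounting of the library's memory layout:

```python
# Overhead of quantization constants, in extra bits per model parameter.

block_size = 64          # 4-bit weights covered by each first-level quantization constant
const_block_size = 256   # 8-bit constants covered by each second-level 32-bit constant

# Without double quantization: one 32-bit constant for every 64 weights.
overhead_single = 32 / block_size
print(overhead_single)   # 0.5 bits per parameter

# With double quantization: 8-bit constants, plus a 32-bit constant per 256 of them.
overhead_double = 8 / block_size + 32 / (block_size * const_block_size)
print(round(overhead_double, 3))  # ~0.127 bits per parameter
```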
We're talking about dropping a pin on the number line and then representing numbers around that point, and then doing that one more step abstracted, which is harder to visualize, but there it is. Okay, so how do we do it in code? Well, first of all, we've got to load our familiar usual suspects here: bits and bytes, datasets, accelerate, the LoRA lib library, transformers, and PEFT. These are all staple libraries we're going to be using when we're using these QLoRA tools. Then we're going to grab our model, and the model we're going to grab is the Mistral AI Mistral 7B Instruct v0.2. It's the most recent instruct model for Mistral, and it's a great one. And then this is kind of where the magic happens: this is the bits and bytes config, from the bits and bytes library. We're going to see that we load in 4-bit. This means that when we actually move our model from those saved weights that exist on our drive, when we load those into our GPU, we're going to load them in that 4-bit quantized state, right? So that's that collection of numbers, and then their quantization constants, and then the constants on top of those constants, because we're using this use double quant flag. If we omitted that use double quant, we would only do one step, and then we would be saving less effective memory. We're also going to be using the quant type of that NF4 I talked about. That's the number type created by Tim Dettmers and crew, which is information-theoretically optimal. Again, not literally true, but it's close enough, so we'll keep saying it. And then we're going to have this idea of a compute dtype, which is going to be torch bfloat16. Now, this is very important, right? When we store numbers in 4-bit, that's awesome. But when we try to compute with them, it's actually quite bad. If you think about when you multiply two numbers together, especially if they're kind of small, we usually wind up with a number that needs more precision to fully, accurately represent it. When we divide 100 by 1,000, we wind up with a very small number, and the idea is that we'll need more precision to represent that very small number. So what we do with the QLoRA approach is we actually de-quantize whatever we need to compute with. Now, this is done at a per-tensor level, so we never have the full model de-quantized in memory, just one tensor at a time, right? So this saves us a ton of space, and it also lets us compute as if we have the model in that higher-precision bfloat16 format, which is huge. So we're saving tons of space, and then we're de-quantizing, so we also retain some of that compute precision. And that is what lets this method really shine: the fact that we de-quantize for computation and then we store in 4-bit. I think without that, this would be a less powerful method. But with that, it's amazing. You can choose up to full precision here. Obviously, that is going to come with some small memory overhead; you do have to upcast a tensor to the full precision, but it's negligible compared to the size of the model.
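Here is a conceptual sketch of that de-quantize-for-compute idea in plain PyTorch. It only shows the shape of the trick: the real bitsandbytes 4-bit layers do this inside optimized CUDA kernels, and the bin levels here are the toy uniform ones from before, not NF4.

```python
import torch

def linear_4bit_toy(x: torch.Tensor, w_fp32: torch.Tensor) -> torch.Tensor:
    """Toy illustration: store a weight matrix as 4-bit bin indices plus one scale,
    and de-quantize it to bf16 only for the duration of the matmul."""
    # "Storage": indices into 16 levels (4 bits' worth) plus a single scale constant.
    levels = torch.linspace(-1, 1, 16)
    scale = w_fp32.abs().max()
    idx = ((w_fp32 / scale).unsqueeze(-1) - levels).abs().argmin(dim=-1).to(torch.uint8)

    # "Compute": de-quantize just this tensor, do the matmul in bf16, then let it go.
    w_bf16 = (levels[idx.long()] * scale).to(torch.bfloat16)
    return x.to(torch.bfloat16) @ w_bf16.T

x = torch.randn(2, 8)
w = torch.randn(4, 8)
print(linear_4bit_toy(x, w).shape)  # torch.Size([2, 4])
```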
And it does also, and this is critical, come with some inference and training latency overhead. The fact that we have to de-quantize and re-quantize, de-quantize and re-quantize, means that we're performing an additional operation per computation, and that is going to impact inference. Now, Tim and team have written some great kernels for this, so it's not very slow, but it is going to be slower than if we weren't doing that extra operation. And so this is one of the key trade-offs, right? We had questions about trade-offs. One of the key trade-offs with QLoRA and with the bits and bytes approach is that it is extraordinarily flexible, it is very powerful, and it works very well with PEFT adapter methods, like LoRA and others, but it does cost us a little bit of inference latency and training time. So that's important to keep in mind. Once we have our bits and bytes config loaded, all we have to do now is just load our model like we normally would: AutoModelForCausalLM.from_pretrained. We're going to pass in our Mistral AI model, we're going to pass in our quantization config, we're not going to need the cache, and we're going to set the device map to auto, which is going to shove as much as it can into our GPU. In this case, again, because the actual model loaded only takes up about 15 gigabytes of GPU memory, it all squeezes into the GPU there. So that's great. We do some pre-processing on our tokenizer to make sure that it's set up in the right format for training, and then we can look at our model architecture. You'll notice that we have this 4-bit layer, right? This 4-bit layer is where that bits and bytes comes in. You'll see that we have the 4-bit layer on our Q, K, V, and O proj, as well as our MLP. So it's 4-bit all the way down. This is the idea, right? We don't want to just quantize some of the model; we're going to quantize as much of it as we can. However, you will notice that we omit some of the layers, specifically our layer norms. And the reason we omit our layer norms is that we know they're going to tend to a very, very small number, near zero, and we're going to run into training instability issues if we use lower precision to represent these layers. So we're actually going to keep those in full precision. Now, they're very small compared to their weight matrix counterparts, but we do want to make sure we're keeping those layer norms in a higher precision. This is to avoid training instability issues, right? If we have these numbers diverge and cause a ruckus, we're not going to be able to train very well, and so that's why we don't see those 4-bit layers there. Now that we have our model loaded, and we can see that it's in 4-bit, we're very happy about that, it's time to PEFT-ify it. We talked about PEFT last week, so we're not going to spend too much time on it today, but the idea is fairly straightforward. We are going to use our LoRA config to set up our rank; our rank is going to be 64 in this case. We're going to set our alpha, which, by conventional wisdom, should be about twice your rank, though it's always worth doing hyperparameter searches here to make sure you have the most optimal hyperparameters. Your LoRA dropout, pretty consistent value. Your bias is none.
Task type is causal LM, because that's what we're doing. You'll also notice that we have our Q, K, V proj modules. Again, with QLoRA, we want to target as many modules as we can; the QLoRA paper's wisdom is that we should actually target all possible layers with LoRA. In this case, we're just going to leave it up to PEFT to simplify things a bit for us. For our base model, all we have to do is prepare our model for k-bit training. This makes sure that we can train, that all of the trainable layers are set appropriately, and that any frozen layers are also set appropriately. Then we're going to get our PEFT model, and our PEFT model is going to give us those LoRA layers. Now, you'll notice that we have only 2.7 million trainable parameters out of a possible many billion trainable parameters, right? And the key thing about the Q in QLoRA is that when we make each of these parameters one eighth the size, we're effectively reducing this by another factor of about eight. It's not strictly eight, because it doesn't interact with all layers, but the idea is it's about another factor-of-eight reduction in the total size of the parameters that we have to train, which is insane, right? We were already at a fraction of a percent, and then we even further reduce the amount of actual work that we have to do, which is great. And then we can see here that our LoRA layers are also 4-bit, right? Our LoRA layers are 4-bit, as well as our regular layers that were converted to 4-bit. After that, we're going to load some data. We're just going to grab the Alpaca GPT-4 data, and we're going to do this UNO reverse card train, just a fun one. It's kind of the classic now; I think it's what you're going to see whenever you do an instruction tune. It's just fun, and it really proves the point that the process works. So we're going to ask the model to take an input and then generate an instruction; we're going to create a model that's good at generating instructions. We're going to use this generate prompt helper function in order to create the prompts that our model will be trained on. And then we're going to set up our trainer. Our trainer, this is all boilerplate. The other big insight from the QLoRA paper is this paged AdamW 32-bit optimizer. I'm not going to go too far into it here, but the idea is that using paged memory is really, really effective, and it helps us train very stably and very efficiently, with very little cost to us other than having to flag it. The rest of this is all boilerplate, right? It's good boilerplate, but it is boilerplate. And we are going to make sure that we have our bf16 equals true, which is going to make sure that our compute dtype is compatible when we upcast, which is necessary. Now, a question from the chat: it says CUDA, but would a Mac suffice to fine-tune the model in 4-bit? I would recommend an NVIDIA GPU for sure; the kernels are written for it. I believe you can use 4-bit on other devices, but it's not necessarily going to be as efficient or as fast. The optimization of the kernel really added some speed to this process. But I'll get back to you more about that after a little bit of digging, to make sure whether you can do this on Mac, even if it is going to be slightly less efficient.
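Pulling the PEFT pieces just described into one place, here is a hedged sketch of the setup. It assumes a 4-bit `model` already loaded with the bits-and-bytes config shown earlier, and the dropout value is an assumption, since the walkthrough only calls it a "pretty consistent value":

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Make the 4-bit base model safe to train (freezes base weights, handles casting details).
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,                    # rank of the low-rank update
    lora_alpha=128,          # conventional starting point: alpha roughly 2 * r
    lora_dropout=0.05,       # assumed value for illustration
    bias="none",
    task_type="CAUSAL_LM",
    # target_modules left unset so PEFT picks its defaults for this architecture;
    # you could instead pass e.g. ["q_proj", "v_proj"] explicitly.
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # on the order of 2.7M trainable out of billions
```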
We're going to use the SFTTrainer from TRL in order to train, with a max sequence length of 2048, just for Mistral itself, and then we can train this using trainer.train. At the end of the day, we reload our model, just a quirk of PEFT: we reload it, we make sure we load it in 4-bit, and then we have our torch dtype of float16; that's the compute dtype again. And then we are going to look at the model. So we say, as an instruction: identify the odd one out among Twitter, Instagram, and Telegram. That's great; that's an instruction that would result in this kind of "the odd one out is Telegram" response. And you can see the ground truth is "identify the odd one out." And if we look at the base model, we can see that the base model's instruction is much less good. It does not even mention Telegram, and so it's not a very good instruction. But that is it for me and the code demo. So with that, I will pass you back to Greg, who will wrap us up. Yeah, thanks, Chris. That was awesome as usual, and love that deep-dive explanation on exactly what's happening within the quantization method in the QLoRA paper. So today we saw, building off this PEFT-LoRA approach, that PEFT-QLoRA fine-tuning is really about modifying behavior by updating fewer quantized parameters using factorized matrices. So it's this idea of using fewer parameters and of using the LoRA factorized matrix approach. This gets us from 3.8 billion down to 2.7 million parameters, less than 1%. And then we come in with quantization, technically block-wise k-bit quantization, effectively just allowing us to express more information with less. And the key to the QLoRA method is that from that 2.7 million parameter level, we come in and actually quantize down to 4-bit before we begin training. During training, we de-quantize when we have to do computations, and re-quantize to continue the training process. Next week, we are going to be covering not fine-tuning and loading, but serving and inference with vLLM, so we hope you can join us for that one. But for today, we're going to go ahead and get started with the Q&A period. I'd love to invite Chris back up to the stage. And if you guys have questions, it looks like Manny is crushing it in the Slido right now, so shout out to Manny as usual. But if you guys have questions, throw them in the Slido. We'll also try to get to your questions if you throw them in the YouTube live chat. But Chris, let's go ahead and jump right into it here. First question: is the reason we don't get an inference latency benefit with QLoRA because the model weights are retained as 32-bit during inference? I mean, yeah; to be more specific about the phrasing, I think we could say that the model weights are de-quantized to a higher precision during inference. So yes, that is why we don't see a benefit to inference.
In fact, we see a penalty. It's not a big penalty, but there is a penalty. But yes, that's exactly why. Oh, okay. Nice, nice. Yeah, excellent question. Astute one there. And then, first one from Manny here: when we're talking about parameters, are we referring to additional features, such as the Xs in the equation y equals predict of X1, X2, through Xn? Are X1 to Xn considered parameters? What are we talking about when we say parameters? Yeah, parameters, features, it's all numbers, weights. I mean, we have so many different names for similar kinds of objects. I would think of parameters more specifically as the entities that fill up these weight matrices that we use to compute when we're actually doing that matrix multiplication. But yes, essentially, a parameter is any node in the model architecture, right? So this is not something that you're going to want to use with, like, your XGBoost or your kind of traditional ML methods, right? It's not a random-forest-applicable technique. It's specific to that deep neural architecture, and it's also, right now, specific to that transformer architecture, though there's no reason it needs to be; it's just most explored in that space. Hopefully that answers the question, Manny. Yeah, yeah. Well, we'll kind of flow through some of these other questions and pop back to Manny's questions as well. I think this one's super relevant to everybody: if I don't have a powerful laptop, where can I practice these techniques? Honey P, it's Colab. Get yourself into Colab. Colab makes it so easy. And the whole benefit of this kind of thing is that we can load these very large models with very little resource. So oftentimes you can load, like, a 3 billion or 6 billion parameter model in a free instance of Colab, using the free-tier GPU, the T4. So I think that's a great way to start if you don't have a powerful laptop. As you get more embroiled in the space, you might look at other cloud hosting solutions, Lambda or AWS, whatever you want. But for the getting-started beginner, I would say Colab is your best friend. If you want to, you can pay for compute to get slightly more beefy GPUs, but stick to the free tier and stick with your kind of 3 to 6 billion parameter models, and you're going to have a great time. Yeah, yeah. Stick to the 3 to 6 billion, quantize, quantize, quantize. And Colab: we teach entire courses in Colab, and we do a ton of fine-tuning throughout. So just try to be as efficient as possible. Don't sit there and do tuning for days and days at a time if that's not really something that you're interested in. Use small models; try to make the model as small as possible by picking the small size on Hugging Face and then quantizing, for sure. But yeah, there should be nothing stopping you if you're a beginner. You don't have to get AWS, you don't have to get all these things. Okay, Islam, we've got a question that's getting upvoted here: can I do this fine-tuning with llama.cpp, and is this fine-tuning possible to plug into end-to-end fine-tuning within a RAG framework? So, E2E fine-tuning within a RAG framework: yes, 100%. The Arcee AI folks, we've done an event with them; their DALM framework on GitHub, we'll get a link for you guys to drop into the chat. That is 100% a great tool that does leverage, or can leverage, LoRA as well as quantized methods.
In terms of llama.cpp, I'd have to double-check. I don't know off the top of my head, but I will double-check, and then we can include that information in a comment if I'm unable to find it before the end of our time together today. Okay. All right. Back to Manny's next question: we say weights and biases when we talk about ML models or neural network models. So if weights are parameters, in the LLM world, are weights and biases both parameters? Let me think through this question. Are our biases also parameters? I guess that's the question. No. But yes. I mean, at the end of the day, the thing we care about is the weights. That's all, to answer this question: we want to update the weights, aka the parameters. Okay. All right. Good stuff. Then I'm going to go ahead with the last Manny question here: can you speak about examples of LoRA adapters? Like, what are they, and what are they created for? So, from a tool perspective, let's say we create a LoRA adapter that's very good at translating natural language to SQL. And then we create another LoRA adapter, and that LoRA adapter has been fine-tuned to translate natural language to Python. Then we create another adapter, and you see you can kind of go on. The idea is that whenever we do inference, we can choose whichever of those adapters, those LoRA layers, to flow information through, and that's going to make our output consistent with what we fine-tuned it to do. So you can think of them as little hats you can put on your model that are going to change its behavior, but it doesn't touch, it doesn't have to modify, the base model at all. It's just this hat that sits on top of it but gets it to do a different job. And the idea is that we can choose those hats as we want; even at time of inference, we can choose which hat we want it to wear. Yeah. Yeah. And I mean, you know, this is the thing for businesses too. If you think about these adapters, man, they're plug and play. And so if you want the LLM to do something super specific, and prompt engineering has only gotten you so far, and you just can't get what you need exactly, to get in and out in specific ways with your customer or your user; if you want to really constrain what your user can put in, and you want to really constrain what comes out, this fine-tuning piece, this LoRA adapter piece, is going to be your friend. You know, we had a great meme that we posted on LinkedIn recently where it's sort of like: if you're doing fine-tuning, you're kind of doing LoRA. So this is a big question; examples of LoRA adapters would be, like, anything that you fine-tuned, you might say. Okay, we've got a couple of minutes left. I'd like to shout out, you know, "Thanks for the great note, just want to appreciate your efforts." Appreciate that a lot. It looks like we've got George; I think he's struggling with a specific error. Maybe we can comment on that after the event; he's put his error into Slido as well. I guess, last question, and this is a big question, so you can take maybe two minutes, Chris: what are the trade-offs of using dimensional reduction techniques like LoRA, QLoRA, PEFT on LLMs, in terms of training, inference, and fine-tuning?
Like, when you think of trade-offs, maybe best practices here, what do you think of? I mean, the big one is quality, or how good the output is. There is a trade-off there. It's really small, and beyond being really small, it's really small. Okay, this is the way I think about trade-offs when it comes to LoRA and the crew. I can fine-tune the LoRA model to be, let's say, 98% as effective as full fine-tuning, right? But I can do that in a tenth of the time, with a thousandth of the resources, right? So divide the number of resources by a thousand. I mean, that is a trade-off; there is a trade, you're losing 2%. But it doesn't feel like a real trade-off, especially in terms of business value. It's not a real trade-off these days, especially if you use a high enough R, or rank, in your LoRA. So if you're using that kind of 128 R, you're still getting a massive reduction in compute, but you're retaining so much of the performance that it truly doesn't feel like a trade-off. There is a trade-off, to be clear, there is always technically going to be a trade-off, but it lets you do things you wouldn't be able to do otherwise, so it doesn't feel like one. I mean, for small companies, you can fine-tune a model that does a whole new, novel thing that fuels your business, that is your business, right? Something you just couldn't do if you didn't use these methods. In that case, there is no trade-off, right? It's enabling you to do something that was previously impossible for you. That's only advantage. When it comes to inference specifically, both QLoRA, or any quantized method using bits and bytes, and LoRA, if you're talking about non-merged LoRA adapters, do impart a small inference latency penalty. At scale, it can maybe be felt, right? If you're really getting to those hundreds of thousands of requests per second, compared to a very efficient model, you might want to re-quantize that to another format and serve that model directly, instead of having it as part of your LoRA stack. But again, these are problems that come with scale, and that scale kind of also helps you fund the solution. Outside of that, you're not going to feel these issues until you're into the six figures or more of requests per second for your kind of LLM stack. So I would say there are trade-offs, but when you're getting started, they really don't appear as trade-offs. All right. Yeah. Okay. So use PEFT QLoRA unless you've got a million requests per second. Sounds like a plan, dude. All right. Cool. Let's go ahead and wrap it up. Thanks, Chris, and can't wait till next time. Thanks, everybody, for joining us today. Again, next week we'll be back talking inference and serving, and how to do it efficiently with vLLM, one of the hottest open-source tools out there for doing that. We'll tell you a little bit about the tool and its background. If you liked this session, you might also really like cohort 4 of LLM Ops: LLMs in Production, launching February 13th. In that course, which we're going to be soon announcing an expanded curriculum for, you'll learn to prototype and scale production LLM systems, including using RAG techniques, including fine-tuning, and so much more. Check it out in the link. And then lastly, please share any feedback you have on today. You can drop it in the chat, or you can drop it in the feedback form that will drop to you now.
And that's it for today. Until next time, keep building, shipping, and sharing, and you know we'll be doing the same thing. See y'all next week.
Fine-tuning with QLoRA (Quantized Low-Rank Adaptation)
3710
AI Makerspace
20240111
​GPT-4 Summary: Discover how to supercharge your LLM application development by mastering quantization, a game-changing technique that dramatically reduces the size and computational demands of large language models (LLMs). In our upcoming live event, we'll dive deep into the essentials of quantization, demonstrating how it makes LLMs more accessible and cost-effective for tasks like loading, fine-tuning, and inference on limited hardware. Learn the ins and outs of using the bitsandbytes library to load quantized LLM parameters for our Mistral-7B demos, and explore advanced fine-tuning techniques with QLoRA, building on the principles of Parameter Efficient Fine-Tuning and Low-Rank Adaptation (PEFT-LoRA). Whether you're working on development or in production, this event is your key to speeding up the LLM application cycle. Code will be provided, ensuring you have everything you need to implement these strategies effectively. Don't miss out on this opportunity to elevate your LLM projects! Event page: https://lu.ma/quantization Have a question for a speaker? Drop them here: https://app.sli.do/event/7CrWMfvZg2NXWh6aYsKkfr Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/greglough... The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our next AI Engineering Bootcamp on Maven today! https://maven.com/aimakerspace/ai-eng-bootcamp How'd we do? Share your feedback and suggestions for future events. https://forms.gle/u63yUJRD9AijuTE98
2024-06-13T21:42:15.597175
https://www.youtube.com/live/kV8yXIUC5_4
Hey Chris, is it true that we can use PEFT-LoRA to train less than 1% of the trainable parameters in LLMs and still get great results? That's absolutely right, Greg. So how much data do we need to make that happen? It's a lot less than you think you would need. So you're saying that just with a little bit of data, a little bit of fine-tuning, we can mod the behavior and performance of LLMs to be efficient for our tasks, our applications, and our businesses. That's absolutely correct. Yes. Dude, that's so cool. I am pumped to dig into this today. We'll see you back in a bit, dude, for the demos. Hey, everybody. My name is Greg Loughnane, and I'm the founder and CEO of AI Makerspace. Thanks for taking the time to join us today. As we kick off the new year, we want to demystify one of the most mature aspects of building with LLM applications for you today, and that is fine-tuning, parameter-efficient fine-tuning, and low-rank adaptation, or LoRA. And this will hopefully help you really get started on the right foot in 2024. We'll be collecting questions today via Slido, so please drop your questions directly into the link in the chat and also upvote your favorites. Of course, Chris, my friend, CTO of AI Makerspace and the LLM wizard, will be back to lead our code demos soon. So let's get right into it today. Let's talk efficient fine-tuning of LLMs with low-rank adaptation, or LoRA. As always, we want to make sure we align our aims for the session. If you're joining us, this is what you will get out of today: What is fine-tuning? What is PEFT? What is LoRA? How can we actually fine-tune these LLMs? Let's see exactly how it's done with the latest and greatest tools from Hugging Face, including one of the latest and greatest open-source models. So to contextualize this a bit, first we want to talk about, as we build LLMs, why this idea of fine-tuning is important in the first place. Then we'll talk about fine-tuning proper, and then dig into PEFT, this library, this idea of doing this a little bit more efficiently. Finally, we'll talk LoRA, or low-rank adaptation of LLMs, before we see Chris back to lead us through some demos. And then, of course, any questions you have, we'd love to answer them. All right, so when we prototype, when we're building those LLM applications in 2024, right now, we generally think, okay, we want to prompt engineer as far as we can. We want to kind of get this thing up and running in a minimally sufficient one-shot, few-shot type of way. Then maybe we want to ground our LLM application in some of our own data, some of our own context. This is where retrieval augmented generation comes in, as well as, you know, just the idea of this question answering system. And so, after we do some prompt engineering, generally step two would be doing some RAG. We're often looking at fine-tuning with a keen eye and trying to understand when fine-tuning becomes of use to us. A lot of people are asking, do I do RAG or fine-tuning? And the answer is, well, it depends. It's not always linear like this, as OpenAI has sort of put out there. There are actually two different things that we're optimizing when we move from prompt engineering into RAG or fine-tuning. And with the fine-tuning piece in particular, what we're talking about is the way we want the model to behave, the kind of inputs we want it to get, and the kind of outputs we want it to provide.
Of course, general large models can have general inputs and general outputs, but more specific, task-oriented models, like the ones we'll need in our businesses, are going to be a little more dialed in, a little bit more fine-tuned. And oftentimes, as you build these applications, you're going to have to go from prompt engineering to RAG to fine-tuning and back again to get this thing to a human-level performance that you're happy with to first deploy it to production. And then once you're in production, of course, continuing to dial in the RAG systems and to fine-tune the LLM, as well as the embedding model, is going to be very important as you try to serve more and more users, stakeholders, and customers better and better. It's also worth noting that, as we head into the new year, one of the things that's gaining a lot of traction right now is this idea of small language models. Here we see this word efficiency that we're going to see a lot today. Microsoft put out a couple of papers last year, as was noted in the State of AI report. One of them was called TinyStories, and it was sort of interesting because it compared the 125-or-so million parameter models, like GPT-Neo or GPT-2, the small versions, which really don't do a very good job at generating coherent text out of the box, to a much smaller model: instead of 125 million parameters, just 10 million parameters, but trained on synthetic data made up of stories using very, very simple language, the kind of language you could use with a three or four year old. And the TinyStories dataset showed that it was possible to create much smaller language models that really gave some pretty solid English generation. Whether you're training at 10 million parameters, or you have a few more parameters but just one transformer block, this was shown to be possible. Also, when we got specific about just coding, that's where this other paper, Textbooks Are All You Need, came in. Instead of training a huge model and then trying to make it good at coding, it simply took a relatively small model, 1.3 billion parameters, still very, very large in the context of all models we've ever seen, but small when we think about the leading models of today, which are at least 7 billion or so. And fine-tuning that on, quote, textbook-quality coding exercises, some taken from textbooks, some synthetically generated, really produced a very small, relatively speaking, even a 350 million parameter, model that produced great results. These are worth highlighting because, whether we're talking about training from scratch or we're talking about fine-tuning, we're kind of talking about the same thing: dialing these models in for our use cases. And that's exactly what fine-tuning is all about at the end of the day. So when we talk about fine-tuning, as in the latter case of Textbooks Are All You Need, we're talking about taking a language model that was trained using unsupervised pre-training, and probably some combination of instruction tuning and other alignment techniques as well, although not in all cases. So we're taking that base model that we get from this initial training, and it could be a chat model, again, or an instruct-tuned model, but then we're taking our specific instruction examples, and those might be very specific. They might be generated based on our specific product line.
But those examples are the ones that we then take, and we update our language model to be very, very good at them, producing this supervised fine-tuned model that's really more dialed in to the kinds of inputs and outputs and specific instructions associated with what we want the model to be good at. Because, of course, these models can be good at so many different things out of the box. So, as I alluded to just a moment ago, whether we're talking about fine-tuning or training, we're talking about the same thing: we're talking about updating the weights, or the parameters, within our large language models. And this is not new to anybody that's been around the game for a while. I mean, training a simple deep learning neural network is not fundamentally different from what we're doing when we fine-tune a large language model, although there are some key differences that we'll highlight today. The mechanics are there: we have inputs that we pass through the weights, and those weights, fundamentally, are the things that are going to be updated. They can be updated through backpropagation, the classic technique. We're also seeing some really interesting things happening out there in the space where you can do fine-tuning without even using backpropagation, so expect a lot more innovation in this space, especially as small models become more and more popular. Chris did a really great job of talking through LoRA at an even slightly deeper level; he had a bit of a longer-form opportunity on his YouTube channel, and I highly recommend checking that out if you want to dig further into the specific visualization of weight matrices here. But today we're going to go from these mechanics of fine-tuning into this idea of PEFT. I mean, this is a great meme, because it's so true. This is sort of like the new AWS mistake, right? Do I really need to fine-tune it like that again? Do I really need to spend this much compute messing with things? And this is where we get into this let's-be-a-little-bit-more-efficient space. PEFT is parameter-efficient fine-tuning. And the big idea, the problem that it solves, is that when you fine-tune on downstream tasks specified by those datasets of specific instructions and output responses, you can get huge performance gains compared to off-the-shelf LLMs, zero-shot, right? So zero-shot is not really providing a ton of specific instructions, not providing a ton of examples. So you can get pretty far, again, with prompt engineering, and when this fine-tuning comes into play is going to be a judgment call for you as you continue to build and prototype your LLM applications. And when we think about why PEFT, well, we've kind of covered it already, but LLMs are so big, and billions and billions and billions of parameters are hard to deal with. They're hard to load into memory. They're hard to do inference on. They're hard to just manage: to leverage the best parts of them, but not in such a way where we just can't ever get it done on the machines that we have or the cloud instances that we have access to. Because this problem isn't going away. Although these models will continue to get smaller and smaller, with things like Mistral and others pushing those boundaries on the Open LLM Leaderboard all the time, we are going to see these models get bigger and bigger still as we approach and move towards AGI. Fundamentally, we want these models to get bigger and bigger. We want to see what they're capable of.
So the biggest labs will be generating bigger models, and some of the startups out there, like Mistral, we'll see generating smaller models. But fundamentally, we're talking about the weights of the trained model, the parameters in the trained model. When we download a model, we're downloading the weights, we're downloading the parameters; this is the same thing. And those parameters are always represented by a certain number of bits, and generally, if we download in full precision, we're dealing with a lot of bits per parameter. This gives way to the idea of quantization, which we'll cover very soon in an upcoming event next week, and we'll see a little bit of quantization today in Chris's code demo. So when we think about training those neural networks, training those parameters, we need to keep in mind that it's the same as fine-tuning, but it's not all exactly the same. Because when we have a pre-trained network, those weights are pretty solid. We're not going to change every single weight a whole bunch when we do fine-tuning. For instance, in this simple example of sentiment analysis, if you were trying to do some fine-tuning for a specific domain, you can imagine many of the weights within the network wouldn't be particularly different, although some might be. And this is how we want to think about fine-tuning: we're not changing everything, we're simply fine-tuning. And there are two challenges when we do full fine-tuning. Even though we don't necessarily have to change every weight a whole bunch, we're changing all of the weights, and this is very hard to do on our own hardware. And again, when we do inference and we try to store and deploy these things efficiently for low cost, it's very, very hard to do. We need smaller models. And so the solution is to leverage a smaller version of the large model by only dealing with a smaller number of weights, and then only dealing with a smaller number of trainable weights when we do fine-tuning. This is exactly the approach we'll take today. This allows us to have models that are better not only at out-of-dataset pieces of information we want to do inference on, because fundamentally they're more general, but it also allows us to do a whole lot with not very much data, which is very, very cool. So, from PEFT to LoRA. This image probably doesn't mean a whole lot to you if you're not familiar with LoRA yet, but after this brief discussion, I think it will. LoRA, as an entry point for those of you that are complete noobs to this, is simply the number one parameter-efficient fine-tuning method that you should go out and learn today. It's the first one to start with; from there, you can start to dig deeper. But what's really going on in LoRA, and what's the intuition behind why it works? Well, the intuition for LoRA is similar to the intuition behind why fine-tuning works. We have these pre-trained models, and not everything within those models really matters the same amount. They have a low, quote, intrinsic dimension. That intrinsic dimension, and intrinsic simply means essential, that natural dimension of things that matter when it comes to the behaviors we want the model to exhibit, is what we're really focused on. So we can actually change not every single parameter a whole bunch, but we can change just those key parameters.
And of course, identifying those key parameters is a science unto itself, one where we expect to see a lot more progress in 2024, but the key intuition, the idea, is that we don't need to change all of them a whole lot all the time. When we talk about LoRA, we move from this idea of intrinsic dimension to this idea of intrinsic rank, and LoRA is really focused on specific matrices within a transformer architecture. So the hypothesis that the authors put forth here is that the updates to the weights also have a low, quote, intrinsic rank during, quote, adaptation. So we have this idea of intrinsic rank, and we have this idea of adaptation. Let's talk about rank first. Well, hearken back to your linear algebra days in school, and maybe you recall that rank is the number of linearly independent columns in a matrix. Okay, that's not very helpful. If we think about it as a weight or parameter matrix, maybe it's slightly more helpful, but still not super helpful. What we want to think about is that these linearly independent dimensions are the ones that provide unique information about what we need: to solve the problem we're solving, to exhibit the behavior we want this LLM to exhibit, to have that task be completed properly. This is a screenshot of the Word2Vec embedding model shown in three dimensions through principal component analysis, but there's a 200-dimensional underlying representation at play here to put each of these words relative to one another. So each of those 200 dimensions, you might say, matters. The question is: how many dimensions matter in your language model and for your task? And this is what we ultimately want to drive towards in this small language model domain. Now, as we think about how LoRA works beyond this idea of matrix rank, we want to think about the two-step. Step one is to freeze all pre-trained weights. Then comes the adapter piece: we want to be able to create these adapters, these injectable, trainable rank decomposition matrices that we can plug and play into our pre-trained model weights, into our initial LLM that we downloaded. And so these plug-and-play adapters are made up of these rank decomposition matrices; we'll sketch what one of these low-rank updates looks like in code just below. Again, if you go back to school and talk about linear algebra, decomposition of matrices is just about chunking big problems into smaller ones to make things more computationally efficient. If you ever used MATLAB back in the day and tried to invert a million-by-million matrix real quick, you could definitely make MATLAB go brr, and it's pretty fun, but there are better ways to do this. We don't try to invert million-by-million or billion-by-billion matrices, and we don't try to do really, really computationally intensive things with matrices when we're dealing with LLMs either. This is where LoRA comes into play. Finally, we want to understand that LoRA is going to be used in a transformer context. It's not simply a feedforward neural network, although it's helpful to use that image as we get a handle on it. We did a really long-form deep dive on transformers, and we'll probably do another one soon, so if you're interested in that, definitely check it out; it's transformers for a couple of hours there. And then the important point here is that LoRA is not changing everything within the transformer. LoRA is associated simply with the attention layer in each of the transformer blocks.
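As promised above, here is a minimal sketch of one of these low-rank updates as a toy PyTorch module. It follows the standard LoRA formulation, output = W x + (alpha / r) * B A x; this is an illustration of the idea, not the actual PEFT implementation:

```python
import torch
import torch.nn as nn

class ToyLoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable rank-r update: W x + (alpha/r) * B A x."""
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)                    # step one: freeze the pre-trained weight
        self.lora_A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # r x d_in factor
        self.lora_B = nn.Parameter(torch.zeros(d_out, r))         # d_out x r factor, zero-init so the delta starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = ToyLoRALinear(d_in=4096, d_out=4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 65,536 trainable vs ~16.8M total for this one layer
```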
To be more precise, LoRA is not applied to all of the layers, the attention layers and the feed-forward layers alike; it's associated simply with the scaled dot-product attention that's happening within each of the blocks. And as you'll see, it's not even all of this: it's just the queries and the values, the Q and the V, that we're actually leveraging the LoRA adapter for. And of course, if you're familiar with transformers, you already know that this is an encoder block and this is a decoder block. Many of the common LLMs you'll deal with today are decoder-only style, and so that's worth noting here as you start building with decoder-only LLMs and leveraging LoRA for fine-tuning as well. So LoRA provides a ton of advantages for us. It provides a more efficient way to train fewer parameters. Those adapters are very plug and play: if we simply keep the initial weights and then train a bunch of adapters for different tasks, we can use the same model for a whole bunch of different things. So you can start to get curious about how this might dovetail into hosting and inference, and doing more efficient work on the ops side, when you start to deploy more and more of these applications for your company. And you can combine them with other methods and get pretty good results. And then a key point here is that you can merge the adapter weights with the base model, and in doing so, and this is an awesome Hugging Face library thing, you can make sure that there's no additional inference latency versus the base model itself. All right, so to recap before we get into the demo today: we talked fine-tuning, and when we think about fine-tuning, we want to think about simply modifying LLM behavior by updating weights or parameters. We talked about PEFT, which is fine-tuning with fewer weights or parameters. And we talked about LoRA, low-rank adaptation, which is fine-tuning with factorized or decomposed matrices. So, if we put it all together, we have PEFT-LoRA fine-tuning, which is just modifying LLM behavior by updating fewer weights or parameters with decomposed, factorized matrices. So, in a diagram, you can think about PEFT-LoRA fine-tuning versus regular fine-tuning a little bit like this. So today's build is going to be a classic application, the UNO reverse card application: given a response, an LLM output, we're going to predict the instruction, or the prompt, the LLM input. We're going to pick up one of the best models off the shelf that we can find that works well with this fine-tuning paradigm, and that's Mistral 7B Instruct v0.2, definitely one of the best off-the-shelf models you can pick up today. And the fine-tuning data we're going to use is the Alpaca GPT-4 dataset, where we're simply going to switch the output and the instruction. You can check out more about this dataset in this paper right here. But without further ado, it's time to get into the demo: fine-tuning with PEFT-LoRA. Chris, the LLM wizard, is back. Let's see him in action. Thanks, Greg. Yes, okay, so the idea is pretty simple. What we want to do is exactly as Greg has outlined here. So, you know, this is a method to fine-tune a model on much smaller hardware than you should normally be able to get away with. Now, you will need around 40 gigs of GPU RAM for this specific example; during training, it does peak pretty high. So if we look at our GPU RAM, you can see we used peaks of 15.
So if we look at our GPU RAM, you can see we peaked at about 15 gigabytes, so the T4 might not be big enough for you. You can use a very small batch size, of course, and that will help you cheat around some of the space requirements, but for the most part we're going to focus on using a larger card. We're going to use three main libraries here: PEFT, Transformers, and bitsandbytes. We're going to kind of gloss over the bitsandbytes step, though it is extremely important; we'll talk about that more next week when we talk about quantization. For right now, we're just going to focus on PEFT and then Transformers. So the idea is, we have this model, Mistral 7B Instruct, but we want to make it better at a specific task, right? We're not trying to teach the model anything new. We're just trying to get it into a place where it's exhibiting the behavior we desire a little bit better. So we're going to use Mistral 7B Instruct v0.2 from Hugging Face, as well as the Alpaca GPT-4 dataset. You can find this model on the Hub right here, and you can find that dataset also on the Hub right here. Perfect. All right, so what is PEFT? PEFT is parameter-efficient fine-tuning. Like Greg said, all that really means is that we're fine-tuning with fewer parameters, or more efficiently leveraging our parameters. Our matrices have inherently, or intrinsically, low rank; this is discussed in the papers that you saw Greg talk about, and we want to fine-tune our model taking advantage of that trick. The way we set this up is pretty straightforward. It might sound complicated from the concept side, but from the code side, Hugging Face's Transformers library has us ready to rock. We're just going to grab our dependencies and then go ahead and load our model. You'll notice that we're loading mistralai/Mistral-7B-Instruct-v0.2, as we discussed. Basically what all of this right here is doing (and we'll deep dive again next week) is loading your model in a very small fashion: we're going to massively reduce the number of bits that each of our weights, each of our parameters, takes up in memory, and that's going to help us load it onto a smaller card. You'll notice that we have this whole model trained and we only peaked at 15.4 gigabytes of GPU memory, which is pretty awesome. After that, we just load our model, kind of normal stuff, right? Now, we have to do this tokenizer processing. The reason why is that we're using a Llama tokenizer as a root; the whole Mistral suite of models is kind of like a redux of the Llama model. So we want to make sure we're doing this pre-processing so our training goes smoothly. That's the only reason it's there. Now, when it comes to our model architecture, you'll notice that we have these 32 decoder layer blocks. Each of them has a self-attention layer with Q, K, and V projections and an output projection, as well as some rotary embeddings. You'll also notice that we have this MLP, which is gate, up, down, and our activation function.
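Stepping back briefly to the load step Chris glossed over, here is a rough sketch of loading the model in 4-bit with bitsandbytes so it fits in roughly 15 GB of GPU memory. The exact flags are assumptions that vary by library version, and quantization details are next week's topic.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

# Shrink each weight to 4 bits on load so the 7B model fits on a single mid-size GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# The Mistral tokenizer is Llama-style, so set a pad token before training.
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
```

Printing `model` after a load like this is what produces the architecture listing being walked through here.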
And then we have some layer norms that are also useful. The big thing to focus on here is these attention layers. Now, the original LoRA paper, when it came out, talked about targeting, say, Q and V as a rule. There's been some more information that's come out since then that says it might be prudent, especially when using things like QLoRA, to target all of our layers equally, so all of our non-layer-norm layers. But for this example, we're just going to use the automatic PEFT version. What this means is we're going to let PEFT make that decision for us, though you could see increased results if you targeted more layers, or more modules, to be more specific. We're just going to focus on the Q and V projections today. So now that we've got our model loaded and we've decided what we want to focus on, we need to turn our model into a format that's compatible with LoRA. The way that we do that is pretty straightforward. We're also going to use this print-trainable-parameters helper function just to see how few parameters we need to train. The LoRA config has a number of options. The first one we see is R, or rank; when we talk about low-rank adaptation, this is the rank we're talking about. You'll notice that even though we have a low rank, we can go pretty high with it, but we're not going to get as high as, say, our base dimensions. In this example, we'll be using 64, no particular reason. R is a hyperparameter, and you'll want to do some kind of hyperparameter search in order to determine the best value of R, but for right now we're just going to use 64 as an example. Now, the idea of this rank is that we're going to have this base-model-dimension by R matrix factored with this R by base-model-dimension matrix. So we're going to build these sub-matrices, which are smaller, and that's what we're going to update, right? Because, as we'll see in the architecture, what we really want to do is compute the delta weight, or delta W, and the way we do that very efficiently is by only caring about this factorized version of our full weight matrix. Let's look and see what this looks like in the code. You'll see that we just pass in R, 64. That's great. And then we pass in our LoRA alpha, which is 128. Now, the conventional wisdom here is that alpha should just be twice R. There's an incredible blog post from the Lightning AI folks where they ran many different experiments; this is not always true, but it's a great place to start. You might want to explore different ratios, but for the most part, your alpha should just be set to twice what your R is. And alpha is just a scalar. For target modules, again, we have a decision here. We can let PEFT take the wheel, if it's a model that already exists in the PEFT library, or we can target specific modules. For this instance, we're just going to let PEFT take the wheel, and it's going to choose Q and V and omit K. But we could manually add K, as well as whatever MLP layers we wish to add. Totally up to you. Then we have this dropout; dropout is there to help prevent overfitting.
We're not using a bias, and our task type, of course, is causal LM, since we're doing a causal LM task. Next, we need to prep our model. The first thing we're going to do is prepare-model-for-k-bit-training; this is great, we want to do this. Then we're going to get our PEFT model, which we also want to do. There's a lot going on under the hood here, but what it boils down to is moving the model into a format that's compatible with our desired training. Then we're going to print our trainable parameters just to see. You'll see we go from a lot of parameters down to roughly 27 million trainable parameters, which represents 0.72% of our total parameters. So we are only training 0.72% of the possible parameters we could train using this method. That's why it's parameter efficient, right? This is where the idea of parameter efficiency comes in: despite training less than a percent of the total trainable parameters, we are able to get great results. Let's look at the model architecture to see exactly what's happened here. We have our Q proj, which was previously just a linear layer. It's now been replaced by a LoRA linear, which has our base layer (that's that W), then our LoRA dropout, then LoRA A and B. LoRA A is 4096 by 64, and LoRA B is 64 by 4096. If we were to combine these two matrices, we would get a 4096 by 4096 matrix, which is what we want, since the input and output dimensions are each 4096. So the idea here is that we're just breaking down our delta W into these two sub-matrices. Every time we calculate what our weight changes should be, we update LoRA A and B, combine them together, and then literally add that onto the base layer to keep track of our changes so that we can keep learning. That's it. One of these is initialized as a random Gaussian and the other is initialized as zeros. There's some intuition behind why we do that, but for the most part, that's the rule and that's what's going to happen. You can do more tactical initializations, but the base method just uses the Gaussian and then zeros. You'll notice that K proj is still just a plain linear layer, while V proj also has our LoRA layers. That's because it only targeted V and Q. And that's it. That's all LoRA does, right? So now, every time we update our weights, what we're really updating is these LoRA sub-matrices, and then we combine and add them to our base layer in order to get the final results. So how do we train this on data now? It's great that we have a better understanding of how it might work, but how do we actually train on data? Well, it's the same as every other time: we're going to grab some data and create a dataset. It's got 52,000 rows, and we don't need all 52,000 for this sample. Looking at how it's structured, we have some instructions, we have an input, we have an output. That's great. We're just going to select a subset of 5,000 of these rows and then convert it to this kind of format: generate a simple instruction that an LLM could use to generate the provided context, then we provide the context, and then the response. The idea being that, in this case, we want to show the LLM the response and have it create the instruction. So this could help us to synthetically generate an instruction dataset.
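Before moving on to the prompts, here is the LoRA setup from the last few minutes pulled into one sketch, continuing from the quantized `model` loaded in the earlier sketch. The r, alpha, and Q/V targets mirror what was described; the exact module names and the dropout value are assumptions.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

lora_config = LoraConfig(
    r=64,                                  # rank of the update matrices (a hyperparameter)
    lora_alpha=128,                        # scaling factor; a common starting point is 2 * r
    target_modules=["q_proj", "v_proj"],   # the attention projections discussed above
    lora_dropout=0.05,                     # assumed value; helps guard against overfitting
    bias="none",
    task_type="CAUSAL_LM",
)

# Prep the quantized model so gradients flow where they need to, then wrap it with the adapters.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)

# Expect roughly 27M trainable parameters out of the ~3.8B counted, i.e. about 0.72%.
model.print_trainable_parameters()
```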
For generating our prompt, this is a function that a library we're about to see is going to use to help it understand what each prompt is. We basically feed it a row of the dataset and it generates a prompt in this format; that's all it's doing. It will be mapped over our entire dataset and will result in something like this: create a simple instruction that could result in the provided context, then the instruction flag, then a bunch of context, then the end-instruction flag, and then our response. And that is it. Once we have that set up, we can move to creating our training arguments. It's a lot, but it's all boilerplate. Basically, we're going to have our output directory, we're going to train for 100 epochs, and we use a batch size of four. If you're doing this in a T4 instance or the free Colab version, you might want to reduce this to two or one in order to prevent out-of-memory issues. We're going to use gradient accumulation steps, which is kind of a way to cheat out a higher effective batch size. We're going to use gradient checkpointing, as well as the paged AdamW 32-bit optimizer. This is straight from the QLoRA paper, and it's just a good optimizer to use. We're going to have a fairly aggressive learning rate, and then of course we're going to set our dtypes. We'll talk about this more next week, but for now, the idea is that we need to ensure we're computing things in the correct dtype. Then we have some remaining boilerplate. Again, these are all hyperparameters; you will want to fiddle with them if you're producing a model for production use, in order to ensure you get the best possible result. Then we're going to use our TRL SFT trainer. SFT trainer just means supervised fine-tuning trainer, right? Easy peasy. We're going to pass in our model, our dataset, and our LoRA config; pass in this max sequence length, which is just going to be 2048 (you can set it to whatever you'd like, as long as it is less than or equal to the max sequence length of your model); pass in our tokenizer; and then pass in that generate-prompt function we built above so that it can be mapped over our dataset. Then we pass in the training args that we just set up. After that, all we've got to do is everyone's favorite thing to do, which is .train(). You'll notice that the loss slowly goes down over the course of training; we get down to around 0.8 and then down into the double zeros. So that's great. It does plateau fairly quickly, and then we just keep training, so it's likely overfit. We're going to go ahead and save this model. Now, it's interesting: you can't really run inference with just this, right? This model is a LoRA model. We'll need to set up an AutoPeft model, and that PEFT model is going to leverage the adapter, those weight matrices we wound up training. So you can't run inference on the newly trained model directly. You have to convert it to a format that is suitable for inference.
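Before we get to inference, here is the training setup just described, condensed into a sketch. The values Chris mentioned are used where he gave them (batch size 4, gradient accumulation, paged AdamW 32-bit, 2048 max sequence length); the rest are placeholder assumptions, and the exact SFTTrainer signature varies across trl versions. The `model`, `tokenizer`, `lora_config`, `train_dataset`, and `generate_prompt` names refer to the earlier sketches and the notebook being described. The last few lines show the inference-time loading that comes up next.

```python
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="mistral-7b-uno-reverse",   # hypothetical output directory
    per_device_train_batch_size=4,         # drop to 1 or 2 on a T4 to avoid out-of-memory errors
    gradient_accumulation_steps=4,         # cheat out a larger effective batch size
    gradient_checkpointing=True,
    optim="paged_adamw_32bit",             # the optimizer popularized by the QLoRA work
    learning_rate=2e-4,                    # fairly aggressive; an assumption
    max_steps=100,                         # a short illustrative run; the demo plateaued fast
    bf16=True,                             # on an A100-class card; use fp16 on a T4
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,                           # the PEFT-wrapped model from above
    train_dataset=train_dataset,           # the reversed Alpaca subset (assumed name)
    peft_config=lora_config,
    max_seq_length=2048,                   # must be <= the model's max sequence length
    tokenizer=tokenizer,
    formatting_func=generate_prompt,       # the prompt-building helper described above
    args=training_args,
)

trainer.train()
trainer.save_model("mistral-7b-lora-adapter")   # saves only the small adapter weights

# Later, for inference, load base model plus adapter in one call:
from peft import AutoPeftModelForCausalLM
inference_model = AutoPeftModelForCausalLM.from_pretrained(
    "mistral-7b-lora-adapter", device_map="auto"
)
# Optionally fold the adapter into the base weights so there is no extra latency:
merged_model = inference_model.merge_and_unload()
```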
Now, you could either merge the weights that you trained in, which basically means you take that delta W, the combination of those two smaller matrices, and just plop it on top of the base layers, or you can do that process during inference. That last part, doing it at inference, is what makes LoRA an absolutely insane approach to use in production in terms of flexibility. You can fine-tune the same model on a thousand different tasks; that's a little bit hyperbolic, but the idea is you can fine-tune it on a number of different tasks, and then at inference time you can decide which one you use. You only ever host the base model, and then you get to choose which combined matrices you use at time of inference, which is what makes this such an incredible tool for a production environment and for hosting models. And all you have to do is use this AutoPeftModelForCausalLM, and it's going to load up that model with the additional PEFT adapters. We use the word adapter loosely here, because there is a separate method called adapters; this is distinct from that, but it is the same idea. It's this entity that we keep track of, which is what we train. The same way that you download model weights, you can download your LoRA weights. So let's look at how it did. We asked it to generate a simple instruction that could result in the provided context, and the context says the odd one out is Telegram, with Twitter and Instagram being social media, and so on. It says: identify the social media platform that is not an instant messaging service, and then it lists three options. That is much more verbose, but it is absolutely the kind of instruction we were looking for. Now, how did the base model do? It's great that the fine-tuned version did well, but how does the base model do? Well, the base model does pretty poorly. It says: identify the platform that's primarily used for instant messaging and voice over IP instead of sharing information; the odd one out is Telegram. So it doesn't do a terrible job, but it definitely does a worse job: it doesn't provide the list of options, and so how could you identify the odd one out? In any case, the idea is that even with only 100 epochs in Colab, using a maximum of about 15 gigabytes of VRAM, we're able to fine-tune a 7-billion-parameter model to be better at a task. This whole process took very little time, and we could choose which of these fine-tunings to use at time of inference, which, it cannot be overstated, is powerful. But that's all for me, so with that, we're going to pass it back to Greg, and I'll see you all in a bit for some Q&A. Man, Chris, thanks so much. That was deep and super relevant to anybody building those production LLM applications this year. Love to see how much of this is really one-to-one from prototyping into production. In case you missed it, we just trained roughly 27 million of a possible 3.8 billion total trainable parameters, and that's less than one percent: 0.72%. And just to try to put this together a little bit in your matrix algebra, linear algebra minds: remember, we're training only the attention layers within the transformer, and only Q and V. So Q has the same dimensions as the attention layer, 4096 by 4096, so we had LoRA A and LoRA B of the sizes we saw. Remember, Chris chose 64 for the rank dimension, following the paper's approach.
And then for V, it's got dimensions 4096 by 1024, so its LoRA A and LoRA B were slightly different sizes. This is how all of this comes together. Again, those adapters can be plugged right in, even at inference, and that's really cool. And so, in conclusion, we got some great results, and we learned that PEFT-LoRA fine-tuning is all about modifying LLM behavior by updating fewer parameters with those factorized matrices. Fine-tuning is going to be a heck of an important skill in 2024, as it helps us leverage only the essential, intrinsic dimensions for our downstream tasks. And as the age of small language models comes upon us, it's going to be one of those things that we're asked to do more and more. As we've mentioned a couple of times, next week we are going to cover quantization and QLoRA. We'll talk about things like bitsandbytes, quantization in general, and the trend towards smaller and smaller models; that trend is not going anywhere in 2024. So with that, we'd love to open it up for questions. Chris, come on back up and let's get it rocking and rolling. Feel free to drop your questions in the Slido or in the YouTube chat and we'll do our best to get to them. Chris, that was awesome, man. I think they liked it too. I love LoRA; it really does the thing. Yeah, it's so interesting. I think it's one of the few things that stuck around for the vast majority of last year. It didn't go anywhere, nobody budged it, except the same people who came out with it kept coming out with better stuff. So that's what we'll talk about next week. So Deb asks: could one use LoRA to fine-tune a very large image recognition model so that it recognizes a few cases of interest on which it has not been trained? Yeah, so LoRA is not only applicable to language models. It sees a lot of use in stable-diffusion-esque models, so image generation models. I would assume that you could do this; I haven't, so I don't want to say super concretely how effective it would be, but you could apply LoRA to basically anything. It is not specific to any architecture. That's why we can apply it both to the attention weights as well as the MLP weights with little issue. Yeah. So that idea of finding new stuff it hasn't been trained on, maybe if it's sort of the infinite object detector already, sort of why not, kind of thing. Yeah, so one of the things you can do is fine-tune object detection algorithms to recognize certain classes better, or more robustly, and I'm certain you could use LoRA to achieve that. Okay, yeah. So try it out, Deb, and let us know for sure. David Reitman asks you to define "great results" with PEFT; I believe you said we got great results. As one would assume, if you're only training a subset there, is there some sort of loss in output quality? Ah, yes, but also no. I would say for the most part, LoRA comes without a specific performance hit. It will compromise your model's ability to do all of the tasks it originally was able to do a little bit, but in terms of the base functionality of a language model, which is generating text, there's no real performance hit.
So, if you had a model that was good at math and prose and technical writing, let's say, and you used LoRA to make it really, really good at technical writing, there is the possibility that it might be worse at the math, right? But it's not going to be much worse; that's what was found. And that's, again, due to the fact that these behaviors exist in a low-dimensional space that we can manipulate very readily and easily. Nice, nice. Okay, Ali asks: should we always use a fine-tuned model instead of prompting for production-ready applications, considering that accuracy plays a crucial role? Should we always be fine-tuning, Chris? What do you think? No. There's no hard and fast rule for this, and this is my opinion, but there are lots of times that you do not need to fine-tune. In fact, when prompting is enough, there's no real reason to fine-tune, right? If you get very good evaluations on whatever you're building without any fine-tuning, so prompting achieves your benchmark targets, then there's no reason to do anything other than just prompt the model. There are definitely lots of cases where you will need to fine-tune, but if you don't have to, then why spend the cash? 100%. Yeah. And we were sort of covering this idea of how to prototype: you start with prompt engineering, maybe move to a RAG system, and then maybe look at fine-tuning. There are some special cases in between, but fine-tuning is sort of, let's say, when you're mostly there, right? You just fine-tune a little bit. You can get mostly there through some other methods, but it's definitely going to come up if you're working on production-ready applications this year, Ali. So, for students to experiment with these fine-tunings and this learning, what would be your suggested stack for them to be playing with? What do you think, Chris? It's all in the dependencies in the notebook; that's all you need to get started. In terms of the computing solution, Colab is just good enough. For experimentation, or as a playground, Colab is good enough, especially if you pay for Pro or you pay for compute units, depending on which is the more feasible solution for you. And because what we're talking about inherently is training these big models while using much smaller resources, you can even use the free version with a batch size of one to fine-tune a Mistral 7 billion, right? So I would say Colab is a great place to get started. It's really easy, you don't have to manage the environment, it's all done in the cloud, and you don't have to do any weird signups or anything; you don't have to request quotas, and you don't have to worry about cost while the thing is not running. So Colab is a really, really great solution. That being said, you can run these things locally on a consumer-grade GPU, a 3090 or whatever, or a 2080 with 16 gigs of VRAM, or even a Mac; Apple just released a bunch of Mac libraries that make it even easier to run these models on a Mac. You can do all of this locally if you really wanted to, but if you don't want to fight with environment setup, I mean, Colab's great. It's so good. Yeah. Because maybe what the real question here is, is something like: should I be using AWS?
Should I be using one of these big cloud providers? Which one? And it's like, well, no, not really. To get started, you really don't need to worry about that. Okay, I'm going to go to a question from the chat now, from Vadi Yala, because I think it's super interesting and relevant to what's happening today. Is it possible to LoRA fine-tune with the DPO method? We've got some lines we can help discern here: LoRA versus DPO. You got any hot takes on DPO, Chris? DPO is good. What is it, like better RLHF, in a way? Not really, not really. But it is a great solution; it's a great thing to do instead of doing something like PPO. I would say for the most part, these are compatible systems. You can do PPO with LoRA, you can do DPO with LoRA. Again, LoRA is not specific to a technology. It's just specific to the fact that there is this weight matrix that we know has intrinsically low rank, right? So anytime that's true, you can just plug LoRA in there. In terms of DPO versus LoRA, I wouldn't think of them as versus, or as two separate camps. I would think of them as compatible and synergistic technologies; they are not two distinct things that you need to choose between. They can synergize, and so that's great. Nice. Nice. Okay, quick question about input and output embeddings in the transformer, the encoding and decoding blocks: I said the common ones we use today are decoder-only. I did, and that's based on the idea that the encoder-only style is only used for a few applications today. Generally, when we see the GPT models, Llama 2, Mistral, and all the latest and greatest new ones, these are decoder-only style. We've got a long-form discussion of this that we've done in the past that we'll drop in the chat for you now, so you can dig into transformers more deeply, but that's enough on that question for right now. I just want to get to maybe one or two more. Somebody says: Chris, hey, I'm quite new here. I'm into finance, and a sample use case is portfolio management. How would I tune a model to that domain? Finance, portfolio management. Well, there are lots of different ways. The idea is to start with a model that is already good at that. This is kind of the wisdom of our friends at RCAI, this idea of using a domain-specific model and then adapting that model. So you might want to use something like a BloombergGPT, or whatever open-source equivalent you can get your hands on, and then find your dataset, or build your dataset synthetically, and then use LoRA from there to fine-tune. But the idea is you will get better results if you use a model that's closer to your actual desired behavior. Greg just said it during the presentation, and it's so dang true: we are fine-tuning, so we want to already be close. Fine-tuning only works if we're close to the target, right? If we try to fine-tune from here to way over there, it's not going to happen. So we want to make sure we start very close to our target, and then we fine-tune those last few millimeters. So I would say start with that, and then you can use LoRA from there. There you go. Yeah. And I would say, too, the sample use case being portfolio management is a little bit too generic-sounding to me.
If you can really dial in which aspect of managing portfolios you care about, so that you can say, okay, this is a simple task associated with portfolio management, maybe the most annoying task that people have to do, maybe it's the selection of stocks within a particular domain of the world and of the stock market, then the clearer you can be on the task, the easier it's going to be to curate the data, the easier it's going to be to do the fine-tuning, and the easier it's going to be to get most of the way there through prompt engineering, RAG, and some of these other techniques. So yeah, great questions, everybody. Thank you so much for joining us and kicking off the new year strong. Chris, thank you so much for dropping the knowledge on us. We'll see you next time, man. Thank you, everybody, for joining us live today, and this brings us to the end, just for today, until next week, when we talk quantization and we talk QLoRA. Next week, also on Tuesday, January 9th, we'll be launching our LLM Engineering Cohort 2 course. This is our course that teaches everything you need to know to build and train your very own LLMs from scratch, including the transformer, attention, fine-tuning (all the different aspects and types of fine-tuning), prompt engineering, and RLHF. You'll be doing it all, and you'll be building your very own LLM from scratch as a capstone. Check it out if you're into that kind of thing, or reach out to any of us personally on LinkedIn or wherever you can find us: Greg at AI Makerspace, Chris at AI Makerspace.io. Other than that, please share any feedback you have on today's event that might help us bring even more value to you in the future. And until next time, keep building, shipping, and sharing, and we'll do the same, everybody. See you soon.
Fine-Tuning Mistral-7B with LoRA (Low Rank Adaptation)
3,675
AI Makerspace
20240104
GPT-4 Summary: Dive deep into the innovative world of fine-tuning language models with our comprehensive event, focusing on the groundbreaking Low-Rank Adaptation (LoRA) approach from Hugging Face's Parameter Efficient Fine-Tuning (PEFT) library. Discover how LoRA revolutionizes the industry by significantly reducing trainable parameters without sacrificing performance. Gain practical insights with a hands-on Python tutorial to adapt pre-trained LLMs for specific tasks. Whether you're a seasoned professional or just starting, this event will equip you with a deep understanding of efficient LLM fine-tuning. Join us live for an enlightening session on mastering PEFT and LoRA to transform your models! Event page: https://lu.ma/llmswithlora Have a question for a speaker? Drop them here: https://app.sli.do/event/cbLiU8BM92VixUStDH24U3 Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for the LLM Ops Cohort on Maven today! https://maven.com/aimakerspace/llmops How'd we do? Share your feedback and suggestions for future events. https://forms.gle/U8oeCWxiWLLg6g678
2024-06-13T21:47:21.636659
https://www.youtube.com/live/Azfc-TjG9Tg
Hi everyone, and welcome to LangChain: How to Build ChatGPT for Your Data. My name is Greg Loughnane, and I'm the founder of the Machine Learning Makerspace, a brand new online learning community focused on empowering data scientists and machine learning engineers to build generative AI and LLM applications that create real value. We appreciate you taking the time to join us for the event today. Please share in the chat where you're tuning in from; we're so happy to see you at our first kickoff event. During our event today, you'll learn not only how to build a ChatGPT-like interface on top of a single document using LangChain, but also what it takes to build a multi-document question-answering chatbot, complete with agent memory and backup Google search. If that already makes you feel overwhelmed, don't be. We're going to take it step by step, one piece at a time, build everything up just like Lego blocks, and take it really easy when we get to the super advanced part. If you hear anything during our event that prompts a question, please follow the Slido link in the description box on the YouTube page. We will do our best to answer the most upvoted questions during the Q&A portion of today's event. Without further ado, I'm so excited to welcome my good friend and colleague Chris Alexiuk to the stage today, as we'll be working as a team to deliver this lesson. Chris, what's up, man? Hello. Yes. How are you doing? Very excited to be here. Yes. Chris is the founding machine learning engineer at Ox, an experienced online instructor, curriculum developer, and YouTube creator. He's always learning, building, shipping, and sharing his work like a legend. He embodies what we aim to build here at the Machine Learning Makerspace, and I couldn't be more excited to share the stage with him today. One quick note on the flow: I'll share concepts and Chris will do the demos. So with that, we're going to get right into it. Welcome to today's event. Hey, Chris, what do you say we tell them a little bit about where we're headed with the data that we'll be ChatGPT-ing today and share a sneak peek of the final product? Absolutely, Greg. Today, we are going to be heading down the rabbit hole. We're going to be looking at some of the texts produced by Lewis Carroll, the Alice in Wonderland series more specifically. We're going to be using those as the documents that we query across and chat with. And with that, we have an agent that helps us do that using LangChain, which we've named the Mad Hatter. So we'll just ask it a sample query, something like: how can you avoid potential pitfalls of falling down a rabbit hole? You can see that the system uses our agent, as well as our supplemental chains, in order to produce a response, and eventually it outputs a response to us. Unfortunately, there's no specific information available on how to avoid the pitfalls of falling down a rabbit hole in Alice in Wonderland; we know that our main character does fall down the rabbit hole, so that makes sense. Many lessons to be learned and many things to ask the Mad Hatter from here. Chris, thanks so much for demoing that. Let's see how we built this thing up, piece by piece. So first off, when we're talking about LangChain, we're talking about chains. This is the fundamental abstraction, the innovation, that we want to keep in mind at all times.
And beyond that, we want to build things up with the core components of the LangChain framework. To do our single-document question answering, we need a few pieces that are going to be really common for pretty much anything that we build: we're going to need a model, we're going to need a prompt template, we're going to need to use a chain, maybe multiple chains, and we're going to need to create an index. When we get into multi-document question answering, when we get into agents, when we get into doing things that are a little bit more complex, we're going to add some additional layers, some additional pieces, but fundamentally these same core components remain just the same. So we're going to spend most of today's event focused on those. First up, LangChain is all about combining LLMs with other sources of computation and knowledge to create those complex applications. This is the purpose. This is what it's doing for the world today, and this is what you should be thinking about having it do for you. So how can you leverage an LLM to create something even better with chains? A chain is nothing more than a sequence of calls to other components, including other chains. It's a very generic abstraction, but it's also a very useful one. Single-document question answering is going to essentially require three simple chains, and we'll see how these are built up in the following slides and demo. We're going to use the core components that we talked about in the outline, models, prompts, chains, and indexes, and we're going to take them one at a time. First off, we need a model, and the type of model that we're going to use is called a chat model. This chat-style model is a little bit different from the way you may have been interacting with LLMs so far, in an input-output, text-oriented way. Instead, with the chat style, the I/O schema takes a list of chat messages as input, and the output is a single chat message. Now, as we get into the chat model, we need to differentiate between the types of messages that we're going to be using, and the way we do that is by defining what are called roles. We have the roles outlined here in yellow: the system, the user, and the assistant roles. These are core, these are fundamental. This comes directly from OpenAI, and this is how we leverage LangChain: by thinking about these roles. Diving a little bit deeper into the roles, the system role is the thing that provides a little bit of context, maybe a voice, maybe a user persona, maybe a character. We're telling it a stance to take, some place from which to answer the question. In this case, "you are a helpful assistant" is the system message. The user message is simply the user of the program; it could be you, or anyone else using the application you've built. And the assistant message is essentially the AI message. It allows us to act in place of the AI answering questions, effectively producing a one-shot example in the prompt input. Okay, so let's go a little deeper on that idea, after we note here that OpenAI and LangChain use very similar terminology, but not exactly the same. The system message is the system message, straightforward. The user message in OpenAI is called the human message in LangChain. And the assistant message is called the AI message in LangChain.
Again, this allows us to provide outputs from the perspective of the AI that we are interacting with, essentially providing a few-shot example. So let's check out how this works with some Kraft Mac and Cheese. Chris. Yes. Okay, so we'll pop into the code here. First things first, we just want to be able to interact with our LLM using LangChain. In order to begin doing that, we first have to grab some dependencies: we're going to start with OpenAI and LangChain, since, as Greg said, we're going to be leveraging the OpenAI endpoint here. We just have to set up our OpenAI API key, to ensure that we have access to the OpenAI endpoint. We have this helper function, which just displays the text in a way that's nice to look at; it's not doing anything fancy. And down here is where we first set up our chat model. Now, the first thing to keep in mind is that because we are using a chat model, GPT-3.5 Turbo, we have to make sure we're using ChatOpenAI from LangChain's chat models. And that's all we have to do to set up the chat model: just instantiate it with the model name gpt-3.5-turbo. Now we can leverage what Greg was talking about, the terminology shared between LangChain and OpenAI. We have the system message, user message, and assistant message that Greg was describing. In LangChain, that's the system message, the user message is instead referred to as the human message, and the assistant message is referred to as the AI message, but they're the same thing, so you don't have to worry too much about that. It's just a naming convention. You'll notice that in our system message, we can input some text. We have content, and it's: you are a food critic. We have some content in our human message, or user message, which is: do you think Kraft Dinner constitutes fine dining? And then we have our assistant message, the AI message in LangChain, with the content: Egads! No, it most certainly does not. So you can see that we've set up the system message, the user message, and the assistant message. We do this to guide how the LLM is going to respond when we give it a second user message, which is: what about Red Lobster? Surely that constitutes fine dining. We just need to combine these messages into a list: our system message, our first user message (do you think Kraft Dinner constitutes fine dining?), our assistant message (the response that we're prompting it with), and then our second user message, which is meant to get a response from the assistant, in this case the OpenAI endpoint. We call the chat model that we built above with this list of messages, and we get a response: Ah, Red Lobster. While it may offer a casual dining experience with a seafood focus, I wouldn't classify it as fine dining. Fine dining typically involves a higher level of culinary craft... and it goes on. The idea here is that we're able to provide all of this context to our LLM in different and interesting ways, in order to guide how it's going to respond to us and how it behaves. So we'll go to Greg to learn about the next concept that we'll be leveraging. Awesome. Very cool, Chris. So Red Lobster: casual; Kraft: not fine dining. Check. Single-document QA prompts are what we need to look at next. We need to look a little bit closer at how we do prompting beyond the idea of this chat model, and let's recall a couple of prompt engineering best practices.
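Before that, here is the chat-message demo just described, condensed into a rough sketch; LangChain's import paths have moved around between versions, so treat these as the era-appropriate ones, and it assumes OPENAI_API_KEY is set in the environment.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage, AIMessage

chat_model = ChatOpenAI(model_name="gpt-3.5-turbo")

messages = [
    SystemMessage(content="You are a food critic."),
    HumanMessage(content="Do you think Kraft Dinner constitutes fine dining?"),
    AIMessage(content="Egads! No, it most certainly does not."),
    HumanMessage(content="What about Red Lobster? Surely that constitutes fine dining!"),
]

response = chat_model(messages)   # returns a single AI message
print(response.content)
```

The AIMessage in the middle is the one-shot example: it sets the critic's tone before the second user question ever reaches the model.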
Many of you have done a lot of prompt engineering so far, and the core thing we always need to keep in mind is: give specific instructions. Beyond that, we want to provide some context, which means providing some sort of role or voice or character, a place from which the AI can stand. Again, in this case, "you are a helpful assistant." Additionally, we want to really specify the input when we're prompting. Zero-shot gives us decent results sometimes, but when we can give it an example or two, that gives us better and better results. That's why, in this OpenAI example, we're giving it one example output in the assistant role here: Who won the World Series in 2020? The Los Angeles Dodgers won the World Series. Then we can ask it a follow-up question. All right, so as we get into prompt templates: these let us do all of this more easily, without replicating and copy-pasting prompt text over and over again. It's a straightforward tool that you're going to use pretty much every time you build anything. In this case, we've got: you are an expert in {subject} and you're currently feeling {mood}, and we can provide any sort of user prompt simply with this {content} here. So, Chris, walk us through how this prompt template works and a quick example that we're going to do on sparkling water. You bet. As Greg was discussing, one of the things LangChain is best at is reducing boilerplate. It does this in a number of ways: both reducing boilerplate code, code that you typically have to write many times, as well as reducing prompt boilerplate. In this case, we're using prompt templates to build prebuilt prompts that we know are going to be effective for the task at hand, and then modifying them on the fly with specific user-provided information. You can think of it like building an f-string, almost, in Python, which we can set up properly by including these additional pieces of context. So, like we discussed: you are an expert in {subject} and you're currently feeling {mood}. The things between these curly braces are going to be replaced by user-provided context, and the way we make that happen is by using the system message prompt template's from-template method, to which we provide this prompt template here. You'll notice as well that these do have roles: this is the system message prompt template, and this is the human message prompt template. So you can have a prompt template for each of the different roles your LLM uses. We're going to build the human message prompt template to just be {content}: it's going to be the user's question or query, and that's all we're going to pass along to the LLM. In order to make this work in one shot, we need to create a chat prompt template from those messages, the two templates we've already set up, a system prompt template and a user prompt template. This is familiar from what we saw above when we created that list, except this time we're able to format whatever we'd like in place of these variables. So let's see an example of doing that. We're going to use the format-prompt method to format our subject to be sparkling waters, the mood to be joyful, and the content to be "Hi,
what are the finest sparkling waters?" We're going to convert this to messages so we can send it directly to our chat model. All we're doing here is saying the subject becomes sparkling waters, so "you are an expert in sparkling waters and you're currently feeling joyful." That's what we're doing. We send that to our chat model, and we display it so it looks decent in Markdown, and we get this response: as an expert in sparkling waters, I can assure you there are plenty of wonderful options to choose from. We have Perrier, San Pellegrino, Topo Chico, Gerolsteiner, and LaCroix. And that's basically how we do it. We could substitute anything we wanted for that subject or mood, and we don't have to rewrite the prompt or do anything like that. LangChain handles that for us through the use of the prompt template. We'll pass it back to Greg to explore the next concept we're going to be leveraging. Nice. Very cool, Chris. Great to see how that prompt template can help us make decisions about not just what to eat, but what to drink as well. As we get into actually chaining things together, this is no more complex than simply putting things together. The LLM chain that we're going to use is the most popular chain you'll come across within LangChain. It simply takes our chat model and our prompt template and links those two things together, with code that really is as simple as what you just saw on screen. Chris, let's see how this works exactly. You bet. Just as Greg said, this is very straightforward. We are just looking to chain our prompt into our LLM. Again, this is to reduce boilerplate, right? Sure, we could just wrap this in a function and call it whenever we needed it with the chat model. Instead, though, we can build an LLM chain, which has knowledge of that prompt, chains it into our LLM, and returns the response from our LLM. In order to build this, all we have to do is provide our chat model, which we created in the first demonstration, and our chat prompt, which we created just a moment ago, and put them into an LLM chain. Now we can call chain.run, which is going to run our chain, unsurprisingly, and we can include our subject, our mood, and our content. This time we're saying sparkling water again, just to stay on theme, but the mood is angry, and we're asking: is Bubly a good sparkling water? To which it responds: Bubly? Are you kidding me? That stuff is a disgrace to the world of sparkling water, nothing more than a cheap imitation trying to ride the coattails of true sparkling water brands. The flavors are weak, the carbonation is lackluster, and don't even get me started on the aftertaste. A watered-down disappointment. So this is the idea: we're able to modify these things on the fly, and we're reducing the amount of boilerplate we have to write when creating these applications. While it's very straightforward with this simple example, once things get more complex, you're going to need to leverage these tools in order to effectively keep track of your prompts and how information is flowing through your application.
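In code, the prompt-template-plus-chain setup Chris just walked through looks roughly like this (same caveat about LangChain versions applying to the import paths):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.chains import LLMChain

system_template = "You are an expert in {subject}, and you're currently feeling {mood}."
system_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_prompt = HumanMessagePromptTemplate.from_template("{content}")

chat_prompt = ChatPromptTemplate.from_messages([system_prompt, human_prompt])
chat_model = ChatOpenAI(model_name="gpt-3.5-turbo")

chain = LLMChain(llm=chat_model, prompt=chat_prompt)

# The template variables get filled in at call time; no prompt rewriting needed.
print(chain.run(subject="sparkling water", mood="angry",
                content="Is Bubly a good sparkling water?"))
```

Swapping the subject, mood, or question changes the behavior without touching the template itself, which is exactly the boilerplate reduction being described.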
So we'll pass it back to Greg to learn about the next concept we'll be leveraging. All right. So down the sparkling water rabbit hole, we found out that Bubly is a disappointment. Moving on, we've got indexing, and this is really where a lot of the magic happens, outside of just tapping into the LLM, when we're talking about building applications. We need a couple of different components here, but this is where we get data-centric. This is where the data comes in, and this is where we put it into a form that the LLM can interact with easily. We're going to need things like document loaders, text splitters, text embedding models, vector stores, and retrievers. Let's try to break down some of this terminology before we look at the code, so we can get the big picture of exactly what we're doing. When we're creating a question-answering tool to look through documents, we essentially need to first create our index, and our index is our vector store, our vector database. In fact, a vector database is simply one type of index; it's just the most common kind. So we take our documents and split them into chunks (it could be one document, it could be many documents). We create a ton of embeddings, which essentially turns those words, in the case of the documents we're looking at today, into numbers, into vectors. Then we store those vectors in the vector store. Simple enough. The retriever then allows us to search that vector store. It's an interface that lets us query the vector store and get back whatever in our data is most similar to what we're looking for. And the question-answering chain then sits on top of the index and the retriever. So let's double-click a little bit on "index" here. An index is a generic term, so don't be scared of it. It's simply a way to structure documents so that an LLM can interact with them. But really, the index you probably care about today is the vector database, a.k.a. the vector store. This is just what we talked about, where the numbers, the vectors, are being stored, and the LangChain default is to use ChromaDB. That's the one we're going to use today, and we'll share a link in the chat with a bit more on why LangChain chose ChromaDB as the default. LangChain supports a ton of other vector databases, and that's something we're going to get into in future events and future community content, but for today we're going to focus on ChromaDB, the simplest possible index, which is a vector store. As we build up a single vector store, the canonical steps are: load documents; split up the text (now, splitting text is more of a black art than a science, so Chris will walk us through that); create embeddings from the text (we're going to use kind of the industry standard today, the OpenAI Ada embeddings model, in our application); and then store the vectors. This vector store is the backbone of the retriever, which simply wraps around the vector store and allows us to query, in natural language, for something that we're looking for.
It looks for something similar inside to pull out, and it does this really, really fast; that's really the benefit of the vector store, that it does it really, really fast. And, not surprisingly, we're going to ask it about why the rabbit is running so late as Alice chases him down the rabbit hole. Chris, let's see how this works and put all this together. Yes. Okay. So first things first, we have to get our document in, right? In order to query across documents, we have to have some documents. We're just going to go ahead and wget one, which is going to be the first Alice in Wonderland book by Lewis Carroll, and we're just going to name it alice_1.txt. Essentially, the first thing, every time, is that you have to get the data into Python. We're going to load this into memory using just classic Python here; nothing LangChain-specific about this. Once we have it loaded, we can start thinking about splitting it. There are a number of ways we can split text, and we need to split text in most cases anyway, because LLMs don't truly have infinite context windows, so you can't shove everything into the context window of an LLM. The character text splitter is going to help us break our data down into bite-sized chunks that retain as much information as possible, but are otherwise just there to ensure we can grab only the most relevant context. We don't want a bunch of other context as well; if we include too much context, it could potentially confuse the LLM, and we want to avoid that. We're just going to use the character text splitter here. We're going to split on the newline character, and we're going to have a chunk overlap of zero, as well as a chunk size of 1000. What this means is that every time there's a newline, we'll potentially split, if what follows would push us past that chunk size. So you can imagine we're just splitting apart by newlines until we've packed as much as we can into this 1000-character window. All we have to do is call .split_text on Alice in Wonderland, and it goes ahead and splits it; we get 152 resulting chunks. This process is also called chunking. Once we've finished chunking, we can create our OpenAI embeddings model. This is as easy as pulling it from LangChain: we have embeddings.openai, and then we get the OpenAI embeddings. We're just going to use that Ada embeddings model from OpenAI's endpoint, as Greg was discussing. We have to get a few dependencies, since we are going to be using ChromaDB, and we're also going to be using tiktoken in order to tokenize correctly and make sure we have the right number of tokens when we're actually running this through our embedding process. In order to embed, though, all we have to do is call Chroma from our vector stores. We use .from_texts on it, we pass our texts (those are the chunks from the splitting step), we pass the embeddings model, and we just add some metadata. This metadata simply associates each passage with its position in the sequence of chunks. And then we of course have our retriever, which is the retriever wrapper, as Greg discussed, so that we can query this in order to get relevant documents from our vector store. Again, we're storing everything as numbers, so any information that flows into this will be embedded, and then the relevant text will be extracted.
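Putting the indexing steps Chris just walked through into one sketch: load, chunk, embed, store, then wrap a retriever and a QA chain around it. The import paths again assume the LangChain version of the time, and the file name matches the one used in the demo.

```python
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

# 1. Load the raw document (plain Python file I/O).
with open("alice_1.txt", "r") as f:
    alice_in_wonderland = f.read()

# 2. Chunk it: split on newlines, roughly 1000 characters per chunk, no overlap.
text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(alice_in_wonderland)

# 3. Embed the chunks and store them in Chroma, keeping each chunk's position as metadata.
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(
    texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]
)
retriever = docsearch.as_retriever()

# 4. Retrieve relevant chunks and "stuff" them into the prompt of a QA chain.
query = "What is the rabbit late for?"
docs = retriever.get_relevant_documents(query)
chain = load_qa_chain(ChatOpenAI(model_name="gpt-3.5-turbo"), chain_type="stuff")
print(chain.run(input_documents=docs, question=query))
```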
So you're going to be able to interface with it through text. We can see an example of this by asking: what is the rabbit late for? We use the get-relevant-documents method on our retriever, pass in our query, and we can see that it finds some context relating to the rabbit being late. It says: "Oh, the Duchess, the Duchess! Won't she be savage if I've kept her waiting?" So we know that the Duchess is going to be mad if that rabbit is late. Finally, we're able to integrate this into a QA chain, which is built using an LLM, and we're using the chain type of "stuff," which just means we're going to stuff all of the relevant context we found into the prompt, so that our LLM can leverage it as potential context. Then we pass our query, and we run it with our input documents, which are the docs extracted from get-relevant-documents. And that's about it. We call this chain using chain.run, and we learn that the rabbit was late for something, but it's not specified what he was late for in the given context. So this is basically putting all the steps we discussed earlier together: we have our chain, we have our prompt, we have our retriever, and finally we get our response. So we'll pass it back to Greg to continue on with the learning. Yeah, thanks, Chris. It's really interesting to see how we have to be really specific: we have to learn exactly which context to use, and how exactly to interact with the data. How we're chunking the data matters, and each piece of how we build up this application really does affect the user experience. So it's really interesting to see all of this come together. This is, in an image, what we just built: the single-document QA. We enter a query, we use our templated prompt, and it looks for the answer within the vector store that we created with ChromaDB, the one we built by chunking our documents and converting them to embeddings. The interactions with the LLM occur on the search, as well as when we're doing our prompt. So this is one of the simplest, most common things that you can do, the single-document QA, and this fundamentally is a natural language interface for your data, ChatGPT for your data, right here in an image. What we want to do next today is give you some insight into how to build a much more complex application, but we're not going to go through each piece of code. We are going to share the code, but we're not going to go through every single piece of it because of time constraints on the event today. However, if you do have questions, please share them in the Slido and we will get to them at the end, or you can wait till the end to ask them in Slido as well. We should be able to answer any and all questions you have today. So with that, we're going to take it to the next level, and again, we're getting more advanced: we're going to do multi-document QA with a Google search chatbot. All right, so we're going to add (and we're going to share the Colab notebook with you now, although we're not going to go very deep into it; Chris is going to produce a Colab notebook run-through post-event that we will share). Okay, so this is a little bit more complicated, but again, we have the fundamental chains: the prompt chain, the tool chain, the data indexing chain. We just have a few extra pieces in each. So what are those extra pieces?
Well, before we get to that, let's talk about agents for a second. This is one of the most confused concepts within LLMs and within generative AI right now. And I think one of the best ways to think about it is with a quote that I pulled from a book that I was reading recently called Complexity. Agents is sort of a generic term. Agents might be molecules or neurons or species or consumers or even corporations. Molecules form cells. Neurons form brains. Species form ecosystems. Consumers and corporations form economies and so on. At each level, new emergent structures form and engage in new emergent behaviors. Complexity, in other words, is a science of emergence. And what are we doing today? We're using LangChain to build complex LLM applications. Agents are a key way that we can take our complexity to the next level. Agents in LangChain are described in a similar way to indexes: LangChain says they are used for applications that require a more flexible chain of calls to LLMs and other tools. Agents essentially have a tool belt. They have access to a suite of tools. You can think of Agent Smith from the Matrix with a tool belt, although it's not a perfect analogy. Which one to use in the tool belt is based on the user's input, and the output of one tool can be the input to another. There are two types of agents in LangChain. We're going to focus on action agents to build our Mad Hatter agent chatbot today. Now, the action agent simply receives input and decides which tool to use. In this case, we're either going to leverage a search of our index, or we're going to do a Google search. And it calls tools and records outputs. It decides the next step to take based on the history of the tools, the tool inputs, and the observations. Now, this history piece is what requires us to add a little bit more to our tool chain, specifically the memory buffer. We have to remember what we were doing so that we can select the next best step. In addition to the memory buffer, we're also adding Google search to the tool chain. And in addition to everything else, we're adding multiple types of documents. So this is different file types and multiple documents to our data indexing chain. But again, fundamentally, we're going to use a data indexing chain, a tool chain, and a prompt chain. And that's really the key set of components that we need to add. There is some additional complexity when we go to implement the code. But what we want to show today is how this all comes together when we build not just in a Google Colab notebook, but when we actually create a Chainlit application, a chatbot-like interface, a true ChatGPT-for-your-data interface on top of this multi-document question answering agent system that can also do Google search. So Chris, can you walk us through a few examples? So maybe we can learn not just what the Cheshire Cat is up to, but maybe we can interface a little bit with that agent to learn how it's working without digging too deep into the code for today's presentation. Yeah, of course. Yes. I mean, really quickly, I'm just going to go through a couple of the concepts that we're adding. So we added more data; it's basically the same process we did before, but you add new data, and we're persisting that vector store, which is great. The first real thing we're going to leverage in order to get that ChatGPT-like experience is we're going to add memory to our chain.
Now, we're not going to talk too much specifically about this, but the idea is that we can provide both a conversation buffer memory, which is "what's happened in the conversation so far" memory, as well as read-only shared memory, which is this idea that some tools can get access to the memory, but they can't actually change it, right? So conversation buffer memory can be modified; this read-only shared memory cannot. So this is useful for tools that should be able to leverage memory, but shouldn't be able to commit anything to memory. In order to add that to our chain, all we have to do is include it: memory equals read-only memory. You love to see it. We're going to set up a couple of tools. You can think of tools as an extension of chains that can be leveraged by our agent, which sits on top and gets to see the descriptions and choose which tool is right for which job. In this case, we have our main QA system, which is our index-powered QA retrieval chain. And then we have a backup, which is Googling, right? So obviously this is not going to cover every case; you're not going to just be able to Google. But if you can, this is an example of that. We create the actual agent. This is just showing you how you can go through this. We don't have to focus on much here, except that we have full control over how the agent acts and what it's supposed to do. We have the ability to give it this chat history so that it can make decisions based on its memory. We have the ability to ask it questions, which are the input. And then we have the ability to let it, quote unquote, think. You can leverage that thinking a little bit more complexly, but for right now, we're just going to let it do what it needs to. We create our zero-shot agent with our tools and with our prompt templates, and then we provide it what the inputs should be. We create our chat model. This is going to be our LLM chain that kind of powers that agent. And then we set up our zero-shot agent with that LLM chain. This is the same LLM chain that we had before, with our chat model and our prompt, our prompt being the agent prompt. And then we make an agent executor. Basically, all this is, is something that's allowed to call other tools, call other chains, and then use those outputs to strategize or come up with a more clear or concise answer. Once we have all of this set up, we're able to ask it the question: what is the deal with the Cheshire Cat? We can see that it enters the new chain. It has this thought. This is the agent executor. It has this thought, and it decides it needs to use the Alice in Wonderland QA system, which is our main retrieval QA system. We're going to ask that system: what is the deal with the Cheshire Cat? It returns a response, and our executor makes an observation, which is that the Cheshire Cat is a character in Alice in Wonderland. It gives an answer. Then the thought of the agent is that it knows the final answer. And so it can just give us that, and we see it here. The Cheshire Cat is a mischievous and enigmatic character in Alice's Adventures. It is known for its distinctive grin. I lost my scroll bar. I'm so sorry.
The Cheshire Cat is a mischievous and enigmatic character in Alice's Adventures, known for its distinctive grin and its ability to disappear and reappear at will. So then I asked the question, well, what makes it enigmatic? And this is where that ChatGPT experience comes in, right? We don't have to ask what makes the Cheshire Cat enigmatic; we ask what makes it enigmatic. And our agent knows, right? I'm not sure about the specific details that make the Cheshire Cat enigmatic. So it asks, again, the QA index, and it gets a response. The response is that the Cheshire Cat is enigmatic because of its ability to disappear, its riddles, and its knowledge of Wonderland. So then I ask, well, what are some of those riddles? The Cheshire Cat, you know, he has riddles, but we want to know them. We ask, and the context doesn't find it. This is likely because "riddles" isn't present; he probably doesn't say, I'm going to ask you a riddle now, Alice, right? He just asks a riddle. So we have to go to our fallback tool, which is the Google search. We can see that the agent doesn't find any riddles in the QA retrieval chain, and so it goes to the backup and it Googles what the riddle is. And we get the example of a riddle, which is: what road do I take? Now, again, what is Alice's response to that riddle? We're not having to provide that context because of the memory. We're in an ongoing conversation here, right? So in this case, because it knows what that riddle is, it's "what road do I take," we can actually see the agent executor decide to ask our QA retrieval chain: what is Alice's response to the Cheshire Cat's riddle, "What road do I take?" And because it's provided the riddle itself, we get the context, and the context provides us the answer, which is: I don't care. I don't much care where, so long as I get somewhere. Which is Alice's response to the Cheshire Cat's riddle. So with memory, with the agent executor, with the tools, we're able to have a seamless chat experience, just as you expect from GPT-3.5, that includes our context that we've given this particular agent through the QA retrieval chain. And with that, I'm going to pass it back to Greg to wrap us up. Boom. That was awesome, Chris. That was so cool to see how exactly the chatbot is working behind the scenes, what it's thinking, how it's making decisions. Really just a lot to take in there as soon as we start playing around with agents. Really enjoyed you walking us through that. From today's event, you know, we really hope that we showed you that the potential for building truly complex applications is there, really emergent applications. It's all there. It's just like, what are you going to build? Single-document QA is really a fantastic entry point for anybody getting started with this tech. It is going to teach you about the foundational constructs within the LangChain framework, and it's going to allow you to get that ChatGPT-like experience with your data doing simple queries. Although if you want to take it to the next level after you master those basics, and you want to get that true chatbot, ChatGPT experience, you really can't beat adding agents, adding additional ways of finding information, and really just getting creative with what exactly the pieces are that you're chaining together.
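For reference, the agent wiring Chris described above — conversation memory, a read-only view of it for tools, a QA tool, a backup search tool, and a zero-shot agent executor — looks roughly like this in the legacy LangChain API. The search wrapper, prompt strings, and tool descriptions are illustrative assumptions, not the exact notebook code; `qa_chain` stands in for the retrieval QA chain built earlier.

```python
# Sketch of the multi-document QA agent: memory + tools + zero-shot agent + executor.
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory
from langchain.utilities import SerpAPIWrapper  # assumption: any Google-style search wrapper works here

memory = ConversationBufferMemory(memory_key="chat_history")
readonly_memory = ReadOnlySharedMemory(memory=memory)  # hand this to tools that may read but not write history

search = SerpAPIWrapper()  # requires a SerpAPI key; used only as the backup tool
tools = [
    Tool(
        name="Alice in Wonderland QA System",
        func=qa_chain.run,
        description="Use this to answer questions about Alice in Wonderland.",
    ),
    Tool(
        name="Backup Google Search",
        func=search.run,
        description="Use this only when the QA system cannot find an answer.",
    ),
]

# The agent prompt exposes chat history, the user's input, and a scratchpad for its "thinking"
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix="Answer the question as best you can. You have access to the following tools:",
    suffix="Chat history:\n{chat_history}\n\nQuestion: {input}\n{agent_scratchpad}",
    input_variables=["input", "chat_history", "agent_scratchpad"],
)

llm_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, memory=memory, verbose=True
)

agent_executor.run("What is the deal with the Cheshire Cat?")
```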
So, you know, thank you so much, Chris, for showing us that. Thank you, everybody, for joining us. We've got the Google Colab links. We're going to share the slides. The Chainlit demo that you saw at the beginning today, we're going to share that as well. Reach out to us directly with any questions that we don't answer today: Greg at mlmaker.space or Chris at mlmaker.space. And with that, I'd like Chris to join me back on stage as we go through the questions of the day. So if you have additional questions, please go ahead and add them to the Slido. We've only got a few going today, so if there are more questions, we've got time for them. Otherwise, we'll go ahead and break early. All right. So first off: love this, thank you, Anonymous. However, am I correct in assuming there's no way to, quote, ground these models without sending data to the OpenAI endpoint? I'm concerned about PII, et cetera. Yeah. I mean, so you can, luckily. Open source models generally are performant enough now that you can use them to power this, which means you can run them on-prem. You can also use your own proprietary models. You can also use Azure services to provide you with a closed-source OpenAI instance in which you don't have to expose any PII. So there are many different ways that you can control the PII flow so it isn't exposed to any public endpoints or semi-closed endpoints. So you can ensure that your customer's information is kept private and never leaves whatever data contracts you have; it never leaves that scope. So you are unfortunately incorrect, but also fortunately incorrect, right? You don't have to use OpenAI, but for this example specifically, we are leveraging it. Yeah, we're hearing that more and more, aren't we, people trying to move away from OpenAI. But it's a great teaching tool, I feel like we find. It's a great entry point; play around with some open source data with it. But yeah, if you're using your own data, it's certainly worth looking at some other things. Okay, so from Not a Number we have: if raw tabular data with labels is presented as a text document, and the QA is like, quote, "predict new label for row," where the row has new feature values, would this work? Sure, why not? Yeah, I mean, you can make it work, right? So we can format the response from our LLM through LangChain using the response formatting tools that are provided within the LangChain framework. So you can format it right back into the row if you want. There is a lot of research and papers that indicate the LLM can be used in such a way, to predict what the next thing is going to be. It's obviously not going to be great without some serious modification or introducing new tools, but if you've built, say, a custom LLM that does this, right, and you hook it up to LangChain, it can absolutely be leveraged as a tool that the agent could use, right? So you can put it into that flow. But I would say it depends on what performance you need or what you're using it for. You can do that, though. Yeah, absolutely. Right, it's almost like if you can dream it, you can do it with this stuff, right? I mean, it's as generic as it gets. So the next question is like the question that we keep getting everywhere we go, Chris: what open source models do you recommend for on-prem, et cetera, et cetera? Yeah, I don't.
Unfortunately, and fortunately, as it exists right now for lightweight LLMs, or open source LLMs, or however you want to define what you mean by open source here, they're all specialized in some way at this point, or can be specialized. And so I think that when it comes to different tasks or different parts of the agent chain, there's different responses to this. A more general, well-performing instruct-tuned model is likely going to be your best bet for the agent, right? Because that has to have general knowledge and ensure a certain response format. And then within each of the tools, you know, there are models that are better at that kind of open QA, right, or closed QA. I mean, it's totally dependent. I would say, though, if I have to give an answer, start with something like OpenLM's OpenLLaMA or Falcon 7B. These models are doing well enough to kind of fit wherever you want in the stack. You do have to keep in mind that it's going to error sometimes, right? This is not a deterministic task. Sometimes it can give a response that doesn't make sense, and so you're going to have to build some custom error handling or parsing in order to get over that hurdle, which you might not have to do if you use a much bigger model, right? So Falcon 40B or an OpenAI endpoint. But the idea is, if you need an answer to just get started, Falcon 7B or OpenLM's OpenLLaMA are a great place to start, and Shopify's new model is also fantastic if you were looking to use that. Great insights. And I think the other layer of this is commercial availability versus no commercial availability. So definitely check the license on the models that you're looking at. If you're just playing around and learning, it doesn't really matter, but as you start to do things for your company, for your business, this is another key question that we're getting all the time. So that's definitely a deep dive for another day, but a definite emerging space there: which model for which application, which size, and which commercial availability, how much privacy. All of these things are really interesting open questions. All right, so from Anonymous: can we ask this bot to summarize the information from our knowledge base in a particular way? Like, can we set the system persona, for example, as a product manager? Yes. Yeah. You sure can. I mean, it's not an exciting answer. Yes. Yeah. I'm kind of thinking, how would you train it to be more like your product managers? I don't know. You can kind of fall down the rabbit hole a little bit there, but yeah, you could certainly just, in that system message that we saw within the chat model, say: you are a product manager, and start there. Also, I did misspeak, I'm very sorry. Not Shopify, Salesforce, the new Salesforce model. Yes, okay. Good call, good call. Okay, tactical question. How do you control the size of the tokens sent to the model by LangChain? There's a max tokens parameter that the OpenAI endpoint accepts, and so you can set it. You can also limit it on your side using a number of LangChain tools. So that max tokens is for the response by the model. If you're wanting to limit what's sent to the model, you can also just use LangChain to do that. There are parameters that you can set that help you to do that.
You can build custom functions to help you organize or determine how many tokens you're willing to send. All of those things can be explored with LangChain on either side of the LLM chain. Yeah. As a follow-up to that: great presentation. How would you go about utilizing LangChain to create an application to generate docs that are three-plus pages long for commercial use? Yeah. At that point, we have to think about what LLMs are good at. We have to think about what's the best tool for the job. If you're wanting to do it all in one shot with very good context adherence, you're looking at some of the larger context window models. This is something like Anthropic's Claude, MPT, or even now OpenAI has up to 16K tokens for GPT-3.5 Turbo. So there's a number of considerations to make. Another way is to break it down into parts and only do one part at a time, so paragraph by paragraph. I mean, there's a number of ways you can approach this, but ultimately what it comes down to is context window and how much you need persistent context through the entire three-plus-page-long document. I would stick with your Claudes and your big-context-window GPT-4s if you really wanted that to shine, because they are the best at those long contexts right now. Yeah. And finally, we have our last question, from Deepak. Other than the LangChain website, is there any other website you recommend to learn LangChain? Also, in your experience, what is the best way to make retrievers? And I guess I'll just throw out the DeepLearning.AI course that they recently put up with Harrison as a great place to start. Gives you some insight, some overview. I'd recommend taking the ChatGPT Prompt Engineering for Developers course first to kind of get the vibe and feel before you head into that, but you could have those two done by the end of the day today. They're just a couple hours each, maybe one to two hours. Any other thoughts on that, Chris? And then on retrievers. I gotta do my catchphrase thing here, whatever. So I would recommend just building with the tool to learn LangChain. Just build stuff with it. You're going to find cool stuff in courses, and you're going to find cool stuff on websites, but if you don't have a reason that you're building, or you don't have a thing you're excited to build, they're just not going to stick like they would if you're trying to solve a problem. I would also say, really don't worry about the fact that you're going to have over-solved a lot of the problems. This is potentially overkill for a lot of tasks, right? You could do these with much simpler models and everything like that, but just use the tool to solve a problem, even if it's over-engineering. It gets you used to the tool, gets you in it. And then the best way to make retrievers: there is some trial and error involved, really understanding your data, understanding how to chunk it, understanding what potential context you could be losing, and setting those overlaps, setting the kind of chain you're going to use and the kind of retrieval you're going to use, right? So, we have normal cosine similarity, but maybe MMR is better, because that takes into account what we've already retrieved, so we're expanding our potential context space.
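Two of the knobs mentioned in these answers — capping the model's output with max_tokens and switching retrieval to MMR so the retrieved chunks are diverse rather than near-duplicates — look roughly like this, assuming the ChatOpenAI model and the Chroma vector store from the earlier walkthrough. The context-trimming helper is hypothetical, just to show where a custom budget function would sit.

```python
# Sketch: limit response size with max_tokens, trim what you send yourself,
# and switch the retriever to MMR for more diverse context.
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    max_tokens=256,  # caps the size of the model's response
)

def truncate_context(chunks, max_chars=4000):
    """Hypothetical helper: keep adding chunks until a rough character budget is hit."""
    kept, total = [], 0
    for chunk in chunks:
        if total + len(chunk.page_content) > max_chars:
            break
        kept.append(chunk)
        total += len(chunk.page_content)
    return kept

# MMR reranks a larger candidate pool down to a diverse final set
mmr_retriever = docsearch.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 4, "fetch_k": 20},
)
docs = truncate_context(mmr_retriever.get_relevant_documents("What riddles does the Cheshire Cat ask?"))
```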
There's a lot of little fiddly knobs you can use, but I would say the best way is to really understand your data and then leverage that information to set the correct parameters and use the correct chains. In other words, build, build, build. Okay, awesome, Chris. That brings us to the end of today's event. Thank you, everyone, for your participation. This has been brought to you today by the Machine Learning Makerspace. Our community is just getting started, and we're so grateful that you joined us for today's event. We're also excited to announce that our first-ever four-week live course, which is going to be on LLM Ops, LLMs in production, is going to be offered starting August 14th. That's going to be the kickoff date. So in the follow-up email that you receive after today's event, we'll share not only a post-event survey, but all the details on our upcoming LLM Ops course. Chris is also going to put together a long-form video explanation of that multi-document QA chatbot with agent memory and Google search, and we'll be sure that you receive that as well. For everything else, please follow us at ML Makerspace on Twitter and Machine Learning Makerspace on LinkedIn and YouTube to stay up to date on everything that comes next. Until then, we'll keep building, shipping, and sharing. We hope to see you do the same. Until next time, everybody. Later.
LangChain: Build ChatGPT for Your Data
3395
AI Makerspace
20230706
GPT-4 Event Summary: Dive into the Future of AI with Our LangChain Workshop: Build Your Own ChatGPT! Discover the cutting-edge Large Language Model Operations (LLM Ops) and master LangChain to create sophisticated LLM applications. This interactive session will unveil the secrets of using chains to integrate prompts with LLMs and vector databases. Learn to craft advanced QA chatbots using just PDF documents, with all demo code available via GitHub. Ideal for anyone eager to harness LangChain for personalized LLM applications or delve into the world of LLM Ops. Don't miss this opportunity to shape the future of AI! Event Page: https://lu.ma/r3q7ndj4 Have a question for a speaker? Drop them here: https://app.sli.do/event/bbCdYRphpHe9KTjfz8S4Ek Speakers: Dr. Greg Loughnane, Founder & CEO, AI Makerspace https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO, AI Makerspace https://www.linkedin.com/in/csalexiuk/ Apply to our upcoming LLM Ops: LLMs in production course on Maven today! https://maven.com/aimakerspace/llmops How'd we do? Share your feedback and suggestions for future events. https://forms.gle/WgviJKqdL9ZsqnFEA
2024-06-13T21:52:59.391396
https://www.youtube.com/live/anIBtQNn1G0?si=cH00wulPXAEIo-Es
Hey, Prompt, what would you say is the open source edge of large language modeling today? Well, Dr. Greg, I would probably say it's got to be Llama 3. Hmm. And Wiz, what would you say is the open source edge of language? I'm probably going to say Gen Z slang kind of conversation. The young kids got us covered, right? And that new, new LLM. Well, what do you guys say we combine these two edges of language and language modeling today? I mean, we got to link up, vibe well, and make a dope team today on Llama 3 and Hugging Face. Right, boys? Sound like a plan? For real. Come in, Greg. Prompt Engineering, Wiz. Today, we build, ship, and share together. We'll see you boys back in just a minute. My name is Dr. Greg. That was the Wiz. And our new friend, yet to be revealed, Prompt Engineering, is joining us today. You may know him from YouTube or his other work on open source projects like localGPT. Today, we talk about how to do end-to-end prototyping with Llama 3, based on how to actually understand Gen Z language. We'll walk you through everything you need, from data to fine-tuning to deploying a scalable endpoint to putting a UI on it and serving it up so that others can use it publicly. We're going to discuss some of the intricacies along the way, and we'll have a lot of fun exploring Gen Z philosophy as well. If you have questions along the way, whether for us or for Prompt Engineering, please drop them in the Slido link, or alternatively, go ahead and smash the chat. We will be getting a little bit more interactive today as well, as we try to build something truly scalable; we want you guys to help stress test it. So let's go ahead and jump right in to end-to-end prototyping with Hugging Face and Llama 3, brought to you by AI Makerspace and Prompt Engineering. As I mentioned, as we align our aim for the day, we want to build, ship, and share something that's really end to end. We want to do it with a scalable endpoint. We want to understand what that means exactly and how we're sort of hitting this nice in-between space when we leverage Hugging Face directly to do this. We want to also take a look at fine-tuning Llama 3 and understand how, in general, we can leverage these open-source models through endpoints directly through Hugging Face. We want to sort of put this in context as to the question everybody has: but why Hugging Face versus AWS or Azure or GCP? Shouldn't I be using one of those instead? It's a great question, and it's worth some discussion today. So what we'll do is we'll walk through end-to-end prototyping and fine-tuning. The focus is going to be on this scalable endpoints piece today. We'll show you how to put a UI on it, deploy it, and finally we'll do some Q&A. So we're going to build an end-to-end prototype today. And we're going to try to see if we can't really build an AI that gets what's going on at the open source edge of language. Oof, for instance. Definition: used to express discomfort, surprise, dismay, or sympathy for someone else's pain. The sound "oof" has been used when a player dies in a video game since the early 2000s. Gen Z: oof. You've probably heard it if you've got a teenager at home. But how can we learn maybe a little bit deeper about how to leverage this type of language, and how can we use AI to help us? Take Plato, for instance. Courage is knowing what not to fear. Say what?
It's all about being aware of the things that could go down, but not letting that stop you from living your life. It's like, you know what could happen, but you're not gonna let that fear hold you back from chasing your dreams and living your best life. You're gonna face it head on, you're gonna come out on top. That's what courage is all about, bro. Gen Z has so much to teach us. So much. And we can see how we can leverage AI to do this and to translate for us with our application today. Let's go ahead and prompt it with our friend, Prompt Engineering. Show us how it works, man. All right. This is what we're going to be building today. It's going to be a scalable front end, and let's prompt it. So I'm going to be using a quote from Aristotle. Excellence is never an accident. It's always the result of high intention, sincere efforts, and intelligent execution. It represents the wise choice of many alternatives. Choice, not chance, determines your destiny. So let's see what Gen Z is going to say about this. All right, so the endpoint is running. I'm going to keep it real. Excellence ain't no coincidence, fam. It's always the outcome of high-key intentions, grinding hard, and making smart moves. It's the smart choice between many options, not just luck, that determines your path in life. All right, back to you, Greg. Oh, dropping the wisdom, fam. What I want to do is I want to ask everybody in the audience now, because as Prompt mentioned, this is a truly scalable endpoint. Isn't that right, Prompt? Yep, yep. Let's test it out. Let's test it out, man. Let's test it out, everybody, if you can. We're going to drop the endpoint and the space into the chat. Go ahead and smash that endpoint. Thanks so much, Prompt. We'll be back to you in just a little bit. And don't forget, smash that endpoint, everybody. Smash it. Let's see how many requests we can get on this thing. See if you can overload it. I bet you can't. Write a script to see if you can overload it. We don't care. Let's see how much action we can get on this thing. In the meantime, let's talk a little bit about prototyping and put this all in context. When we prototype, we're generally going through a few stages. We're generally going through prompt engineering. How about that? Then we're often moving into RAG, and doing RAG straight away, before looking at fine-tuning and ultimately trying to think about some sort of agentic reasoning behavior. And the way that looks is sort of along two axes: you can take prompt engineering and try to optimize what you're putting into the context window by doing RAG, and you can also take prompt engineering and sort of optimize the way the LLM is behaving, the input and output you're getting from the LLM, by doing fine-tuning. This is sort of moving the prompt into the model a little bit, and kind of doing some rewiring based on many examples that you might otherwise put in a prompt in a one-shot, two-shot, few-shot sense. Generally you need both of these things, potentially fine-tuning both an embedding model and a chat model for building more complex systems, and of course you can use agentic reasoning for all of these things as well. What we're going to do today, in the interest of time and doing something that's truly end-to-end, is we're going to just focus on fine-tuning.
And the reason for that is because it's just a little bit too much to try to do RAG and fine-tuning and really focus in on endpoints today. So we're going to go through our process aligned with our ethos here at AI Makerspace of building, shipping, and sharing. Our build is going to be about curating data, in fact creating, from some of the data that we curated, a dataset that's going to serve our purposes. Then we're going to train the latest model. We're going to ship: we're going to actually deploy this thing to an endpoint. We're going to serve it up so inference can be done on it at scale. And then we want this thing to be shareable to others, not just locally, but publicly. So we want to put a UI on it and make it public. As we think about building, what are we doing? Well, we want the data and the model to really play off of one another. And of course, the model doesn't really care what the data is, but the data really does matter. We're going to use datasets and models on Hugging Face to push these things up to the hub. If you've been watching our channel so far, you've probably seen this before. We're going to give about 10 minutes to fine-tuning Llama 3 today to show you how this is done. When it comes to shipping, we're going to deploy our fine-tuned LLM to an endpoint. We could also do this for embedding models if we were building, let's say, a RAG system, but not today, in the interest of time, once again. And as we think about sharing, the UI is oh so important. It's that ChatGPT-style chat interface that people want, and that you want to give your users and stakeholders. So there's a really easy way to do it. We're going to show you how to do Chainlit today. And also, when it comes to deployment for a public URL, it's so easy to use Hugging Face Spaces to do this. One-click deploy to a container and you're on your way. When we talk about the LLM stack and all the things that are part of it, there's a lot of stuff on this slide, and the text is really small, but we can kind of walk through at a high level the fact that we need to do data pre-processing and embedding. This is particularly important in RAG systems. We need to do prompt construction, retrieval, optimization, and that's not even to mention everything that's really on the back end of the execution and inference and the serving up of the models that needs to happen. The reason I wanted to show you this diagram today is to show you just how simple the application is that we're making today. Of all of these pieces in the emerging LLM app stack, we're only going to use a couple of them. We're going to use open source models deployed directly through Hugging Face, actually via AWS SageMaker, but through Hugging Face nonetheless. We're going to use the OpenAI Python software development kit to actually glue this stuff together in the middle. We could use LangChain, we could just straight build this thing out; there are a number of ways. The reason we're using the OpenAI SDK is because it works really well, it's pretty easy to implement, and it's overall a good choice.
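As a sketch of how the OpenAI Python SDK can act as that glue: TGI-backed Hugging Face Inference Endpoints expose an OpenAI-compatible chat route, so you can point the client's base URL at the endpoint. The URL and token below are placeholders, and this is an assumption about the wiring rather than the exact demo code.

```python
# Sketch: using the OpenAI SDK against a TGI-backed Hugging Face Inference Endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-ENDPOINT.endpoints.huggingface.cloud/v1",  # placeholder endpoint URL
    api_key="hf_...",  # your Hugging Face token (the endpoint is protected)
)

response = client.chat.completions.create(
    model="tgi",  # TGI accepts this placeholder model name
    messages=[
        {"role": "system", "content": "GenZify"},
        {"role": "user", "content": "Courage is knowing what not to fear."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```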
And of course, Chainlit allows us to not only take input from the user, but, as we saw, give output to the user. And with that, we're ready to get into the fine-tuning piece. Remember, fine-tuning is about moving from prompt engineering into a place where you have more examples. You're optimizing the LLM's behavior. You're telling the LLM how it should act. Now, there are three types of fine-tuning that we generally are going to be leveraging. These are not mutually exclusive, but they all matter. Training the behavior for a particular kind of response: that's not really what we're doing today. Constraining the input-output schema: we're doing that kind of a little bit; we don't want it to be extremely verbose coming out. But most of all, we're doing a little bit of language training. We're training the model to interpret new words much better. And this language training is the focus of today. If you want to know more about the other types of fine-tuning, check out the practical fine-tuning event that we did not too long ago. When it comes to the tactical side of what we're doing, we're doing something called PEFT quantized LoRA fine-tuning: parameter-efficient fine-tuning using a quantized low-rank adaptation approach. All that is to say, we're modifying that behavior by updating not all the parameters, but the parameters that we need to update in the attention layer, based on the LoRA approach. And we're using a quantized representation of those parameters, those weights, to do so. Now, if you want to know more about PEFT, LoRA, or quantization, check out a couple of events that we recently did on PEFT, LoRA, and QLoRA. What we did today is we went and took the Gen Z dataset, commonly used Gen Z slang, and we combined that with the power of GPT-4. So we gave it sort of a dictionary to provide ourselves with input-output pairs that were more translations, actually, than question-answers in this case. And we used that to fine-tune the LLM. Of course, the model that we fine-tuned used these pairs, and we leveraged off-the-shelf Llama 3 8 billion Instruct-tuned, because of course, instruct-tuned is always the move when you're pulling open source off the shelf. So with that, let's check out exactly how fine-tuning was done, how we push data and model to the hub, and how we use them. Wiz, show us how it's done, man. Oh, yeah. Okay. So as Greg said, basically we started with a list of data, or terms, right? And then we turned that list of terms into this very basic training set using GPT-3.5 Turbo — sorry, GPT-4. And so we get things like: it seems like he's not interested in maintaining a serious relationship. Looks like he's not about cuffing for real. He just wants to keep it casual. This is the idea, right? We built a dataset that we could use to fine-tune our model. Then we fine-tune our model. So this is the notebook that we used to do this. That's going to be shared with you guys in the chat. And the idea is it's pretty much what you're used to, right? So we're going to use transformers and TRL, as well as some supplementary libraries, to train this. We're going to use this dataset that we generated. You can see that it is posted on the Hugging Face Hub. If for whatever reason you wanted a Gen Z–English dataset, feel free to go for it. So we load that up. We've got 105 rows, which is very little data.
And then you can see that it's got the English sentence and it's got the Gen Z translation. Then we're going to create a prompt. The prompt is basically going to be the Llama 3 instruction template prompt. So we have our system message, which is contained in the system header. We're going to have the system message just be GenZify. And then we're going to end our system header and start our user header. Then we're going to pass in the natural language, close our user header, and start our assistant header. And then we're going to expect there to be some kind of translation. And so this is what we see. When we build this prompt, you'll notice we just take away the actual natural language; same with the response. We create a create_instruction helper function, which is going to turn rows of our dataset into the format we expect, which is this prompt. And we can see that in action here. When we create an instruction, we get our beginning of text, start header for system, GenZify, start header for user. "That was really funny." That's a natural language description, right? Or a natural language sentence. And then start header for assistant: "I'm weak." End of text. There you go. Now we're going to load our model. Like Greg said, we're going to load this in four-bit quantization. So the idea is that we have this set up. And the basic idea — I'm just going to go ahead and make sure that this is shareable for you guys, and then I'll drop it in the comments — but the basic idea is what we do is log into Hugging Face. We need to do this because Llama 3 is a gated model. So what that means is that you can't use it if you don't basically ask Meta. So you will have to log in within this notebook, and you will have to request access from Hugging Face. There are other models available from certain teams, say NousResearch, that don't require the gating. We also have the ability to basically load this in 4-bit; I'm not going to go into that here, as Greg said. There you go. And then we're going to set up our tokenizer. We have to do this tokenizer swap because we're going to fine-tune just part of the thing. So we're going to add a padding token, which we're going to set as our end-of-sequence token, which is not typically something that we would want to do, but in this specific use case, because we're going to be using this padding technique later, we can get away with it. Again, we go into more detail in other events about that. We're then going to see how the base model performs, right? So we say things like: they're in a very complicated romantic relationship, and the model says, the drama. So they're in a complicated, huh? That's totally relatable. The Gen-Z-ification of relationships is all about the messy. And it's not quite what we want, right? So we can say things like: she looks very attractive. And we get a response like: you're saying she's got that Gen-Z-ified glow going on. Which might be Gen Z language, but I feel like they're not that self-referential. So the model's kind of on the line here. It's not doing a bad job. It's not doing a great job. Okay, okay, okay. But we want to make it better. So what are we going to do? Well, we're going to use LoRA. Again, we go into that more in another event.
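A rough sketch of the prompt construction and 4-bit loading described above might look like this. The Llama 3 special tokens follow the standard instruct template, but the dataset column names and the helper are assumptions rather than the exact notebook code.

```python
# Sketch: build the Llama 3 instruct-style training prompt and load the gated model in 4-bit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

def create_instruction(row):
    # Column names "english" / "gen_z" are assumptions about the dataset schema
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        "GenZify<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{row['english']}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{row['gen_z']}<|eot_id|>"
    )

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated: request access from Meta first

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Pad with the end-of-sequence token, as in the walkthrough (not usually recommended,
# but workable given the padding technique used during training)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
```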
And then we're going to finally do our fine-tuning. There's a lot of text here that walks you through what the hyperparameters are doing and why we've chosen some of them. And the basic idea here is that we're going to train this thing for 10 epochs, which is going to probably overfit, right? So it's going to be pretty well overfit. But that's okay; we're okay with that. This is just a translation task. We're going to use this low batch size, which is going to keep our GPU RAM pretty low. The reason it's so high here is because we've actually loaded the model again, but you can actually get away with training this thing on a T4 instance, which is the free instance, which is pretty good. Then we're going to go ahead and set up our TRL trainer, which is our SFT trainer, the Supervised Fine-Tuning Trainer. We're going to finally train this model. 10 epochs, right? So we're overfitting for sure. We can see that our training loss gets very low. We're definitely overfitting, but it's fine. And then we can save this model. We're going to free up some memory so we can reload our model. This is going to help you be able to do this on the T4 instance. And then we're going to go ahead and push this to the hub at this address, right? So that we can load it on our inference endpoint later, which Prompt is going to take you through. The idea here is that we want to be able to load this model as an inference endpoint, so it has to exist on the hub in order to be able to do that, at least very quickly. You can certainly find more hoops to jump through, but the quick integration is available that way. And then of course, we're going to go ahead and add that translation tokenizer as well. If the tokenizer is not present in your hub repo, the inference endpoint just won't work. Okay. Then we're going to load this into another pipeline, this time with our merged model, which is our fine-tuned model. And we're going to see how it went. We can say things like "the greatest wealth is to live with little," right? And we get the response: the most valuable flex is being low-key rich in your own mind. Much better than the original, right? Like, definitely, I'm not a Gen Z scholar of language, but this feels more succinct at least, and it's doing what we want it to do. And then we can say this next one: I was born not knowing and have had only a little time to change that here and there. And we get the classic: I was a noob and I've only had a hot sec to upgrade that. Absolutely fantastic. And that's all we have to do. So now we've fine-tuned our model, we have that fine-tuned model on the hub, and we've kind of verified that, at least for these cases, it works. And keep in mind that these are outside of the training set, right? We didn't train it on anything like this. These are just quotes from random philosophers and physicists. So it seems to be doing well. So we can move on to the next steps, for which I will pass you back to Greg. Yes, nailing it. Okay, learning so much here. Let's talk about the endpoints, okay? Now, just to contextualize, let's make sure that we're all clear on the language here. We want to be serving models so that we can do inference. All right. And the idea of Hugging Face Inference Endpoints is that it solves the problem that you need a GPU to run inference. And this makes it really easy to scale this out. Of course, you can host a single application on a single GPU on Hugging Face relatively easily.
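The LoRA and TRL supervised fine-tuning step described above might look roughly like this. The 10 epochs and small batch size come from the walkthrough, while the remaining hyperparameters, column name, and repo names are placeholder assumptions, not the exact notebook values.

```python
# Sketch: LoRA config + TRL SFTTrainer on the small Gen Z translation dataset,
# then push the result (and the tokenizer!) to the Hub for the inference endpoint.
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention layers only
)

training_args = TrainingArguments(
    output_dir="llama3-genz-translator",
    num_train_epochs=10,               # tiny dataset, so this will overfit -- acceptable here
    per_device_train_batch_size=1,     # low batch size keeps GPU RAM down
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    logging_steps=5,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,             # assumed to carry a pre-formatted "text" column
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()

# Both the model and tokenizer must be on the Hub for the quick endpoint integration
trainer.model.push_to_hub("your-username/llama3-genz-translator")   # placeholder repo name
tokenizer.push_to_hub("your-username/llama3-genz-translator")
```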
What you can do with these inference endpoints is have a little bit more control over exactly what's going on with the GPUs needed to run inference. And this idea of control is going to be a central theme of our discussion today. The way to think about the solution that Hugging Face Inference Endpoints provides is that it's an easy way: I have a model; now I have a model that's on an endpoint with a GPU, and I kind of control it. And so you can run this in production environments, like as an enterprise solution; it's possible. Now, should you? Well, it depends. Now, how is this happening? Well, it's happening because under the hood, we're using TGI, Text Generation Inference from Hugging Face. Now, we don't have time to go into the math or details of TGI. If you want to see that at some point, let us know in the feedback and in the chat that we should do an event on it. But TGI is basically allowing us to do inference fast: so many tokens, really fast. And there are lots of tricks that it employs to do this. Again, we're not going to talk about the details of these tricks for speeding up inference, but just know that it works dang well. I want to bring Prompt and the Wiz up to the stage here to ask the question first, for a little discussion sesh here, guys. A lot of people feel like if you're not using AWS, GCP, or Azure, that you're not actually building production LLM applications. First of all, is this true? What's your opinion, Prompt? Definitely not. There are a whole bunch of other options. And actually, if you're building on AWS or GCP, you will need to set up a whole infrastructure, right? But there are better options which will take care of the infrastructure part for you. Okay. Okay. So when you think of other options, what do you think are the reasons why people might choose something that's not AWS or GCP or Azure? So imagine you are a single-person startup. Okay? You don't have a dedicated infrastructure team and you want to deploy LLMs at scale, right? So you can use this TGI inference from Hugging Face, for example. Hugging Face is one of them. In that case, Hugging Face is taking care of all the infrastructure for you. You just need to deploy an endpoint with the dedicated resources, and that's it. Wiz, if you were a one-person startup, would you use Hugging Face Inference Endpoints? I would use anything that makes my life easier, which does include Inference Endpoints. Yeah. The idea is, when you're starting and you don't have a team, and you had to then go and figure out how to either just use Hugging Face Inference Endpoints or go figure out how to use an entire cloud computing platform, it's like, oh man, I'm taking Hugging Face Inference Endpoints all day. Now, if I had the Wiz or Prompt Engineering on my team, I might consider doing something a little more sophisticated. And this sophistication, I think, is the next key point, guys. I think this idea of control that we brought up earlier is very important here. I wonder if you can comment on this, Prompt. You know, my understanding, and maybe you can correct me or verify this, is that if you use a cloud computing platform, you have lots and lots and lots of control, like let's say all the control.
But if you use something that is a little bit more managed, that is a little bit easier, that has fewer things you can change, like Hugging Face Inference Endpoints, then you're sort of just making this trade-off. Is that right? Is that the right way to think about it, Prompt? Yeah, I totally agree, right? Like in the beginning, if you're just deploying something, you don't really want to worry about a lot of optimization. Once you have something up and running, yeah, you definitely want to move to something where you have a lot more control and can do all the optimization that you need, right? So that makes sense. Yeah. And Wiz, if you were going to optimize something after you had it kind of deployed, what would you be looking to even optimize? I don't even exactly understand this from a non-scale-engineering perspective. There's a lot of cost that you incur by using these kinds of managed services, especially the ones that are intended to be more simple. You know, it's always on, it's always available. The actual replica scaling is not that great. You don't have any real decision-making power over how many instances you have or how many you need. I would say, ultimately, you're missing a lot of that fine-tuned control that can really help you optimize cost. So it's going to be high availability; you're going to be able to make sure that it stays up most of the time. So you'll get all of that greatness, but it's going to cost a lot. You're not going to have a lot of fine-grained control over reducing that cost as you begin to learn the habits of your customers, and as you begin to learn the habits of your business, you're just going to be missing out. Okay. So the idea is you'd want to get something up and running, you'd want to take a look at it, you'd want to then assess what's going on with this data, see how people are using it. And then as you optimize, you're really optimizing to serve up the model at a speed and at a cost to you that makes sense for your business. That's really what we're talking about at the end of the day, right? Yeah. The idea is like you have more levers, right? If you have more levers, you can get closer to your desired outcome than if you have fewer levers, right? So, yeah. Yeah. I think an analogy would be to look at no-code tools versus writing your own code. With no-code tools, you are limited, but it's a lot easier to use in the beginning. And if you write your own code, that will give you a lot more control. Yeah. You know, Prompt, if somebody's starting out there, would you generally recommend that people start with no-code or low-code tools from the jump, or would it be obvious to people out there, like, if they were going to use high-code tools, that that's a good idea? I think a lot of people have this sort of fear of, I don't know how to code, I can't build it, or I don't know how to code enough, so I need to keep learning how to code more before I can actually build the thing. What's your advice to folks that don't really want to pick up a no-code tool, but potentially might be able to benefit and do things a little bit faster, even if it's a little more expensive and not as cool necessarily, in the meantime? Yeah, the way I think about it is, if you're not a coder and you need or want to ship fast, right, it's actually really good to adopt these no-code tools in the beginning, right?
You ship a product or idea and you validate it, right? And then comes the second part of optimizing it, right? So for that, you definitely want to, let's say, either learn it yourself or hire engineers who can actually do it for you. Yeah, yeah. And I guess that's sort of the way that we'll wrap this little discussion up. That's kind of the best way, from talking to you guys, that I've come to think about this: we're almost kind of hiring Hugging Face as a partial engineer for us a little bit here. And we're sort of saying, we're going to give you a little bit extra money, not as much money as I would need to spend necessarily to hire an engineer to manage AWS for me, but I'm going to kind of hire you as a bit of an extension of my team to manage this a little bit more for me. But I am still going to be able to scale this out, if I can find that product-market fit, really with the enterprise solution, as much as I want. And that's Hugging Face Inference Endpoints in a nutshell, as far as I understand it. Do you have anything to add to that, Wiz? Absolutely not. I think that's, you guys covered it so well. At the beginning, right, you have money and no time. And at the end, you have money and time, hopefully, if you're doing it right. And when you don't have time, no-code and low-code is a multiplier. So, yeah. Awesome. Awesome. Well, we're going end to end today, so let's get right back into the end-to-end action. Before we go to our next piece of this, we're going to see how we can ship now that we've built, and for that, we go to Prompt Engineering to show us how to set this up. Now, man. All right. So, assuming everything goes well with the training and you are able to push your model to the hub, you can see something like this. So this is the second version of the model that was trained. So I'm going to show you how to do the deployment. So this is a public model, so you can actually access it. And in order to deploy it, there are two ways. Either you go here and select Inference Endpoints. This will basically take you to a new page where you can create an endpoint; click on New, and that will start the process of creating a new endpoint. So we actually need to look for that model that was trained. You can look it up here, or if you have access to the model that you just trained, you can click here on Deploy, click on Inference Endpoints, and it will automatically populate the model repo for you. Okay, next, you need to provide the endpoint name. You can call it whatever you want. I will just stick to the default name that Hugging Face selected for me.
All right, now this is a big model, so we will need a powerful GPU to run this. The good thing about the Inference Endpoints from Hugging Face is that it actually gives you a suggestion or recommendation of which GPU to use, depending on the model that you selected, and there are multiple providers. So we're going to stick to AWS, we'll select GPU, and we're going to go with the A10 GPU. There's an option of automatic scale-to-zero, so let's say there is no traffic on your endpoint for a while; you can actually select it to basically go to sleep. So if I say after 15 minutes with no activity, then it's going to go to sleep, and that will save you some money. Then you can also include options like protected: in this case, in order to access this, your user will need a Hugging Face token, or you can make it public. We'll just keep it protected. Now you want to also look at some other options. For example, if you do auto-scaling, then you need to provide the maximum number of replicas that you want and the minimum number. So the minimum is one, and depending on your needs, you can auto-scale it to whatever you want. The task is going to be text generation. We're going to keep everything else the same; the only thing that we're going to be updating is the quantization. So we're going to use bitsandbytes to run a quantized version of the model rather than the full-precision model, and that will help us basically increase the inference speed as well. Okay, so once you go through all these options, just make sure to select the task as text generation and select the appropriate quantization. We did use the LoRA adapters, so that's why we're using bitsandbytes for quantization. And you just hit Create, and this will start the endpoint creation process. Now, this is going to take a few minutes, so just bear with it, and if everything goes well, you will be provided with an endpoint URL here that you can use in your code. So let me show you how you can do that within Python. Okay, so here is a notebook that we're going to be using. So first, you need to provide your Hugging Face token, because we created that as a protected endpoint. Okay, this is the URL that you will get once the deployment is complete. We are going to be doing async calls, right? So we have this very long text that is going to go in as a query or prompt. Okay. Then this function is the one that is actually making the API calls. So you provide your API URL along with the query that you're going to be passing on. Then there is this helper function, which is going to make 500 calls to the same API endpoint. So we really want to stress test this and show you that it's actually able to handle a lot of calls to this scalable endpoint, and it will hold. So that's why we're using this, right? And we wrap everything within this async main function. So when we run this, it will start making a whole bunch of calls to the endpoint, and we're going to get responses. All right. So this is how you quickly deploy and test the model. I will pass it on to the Wiz. Yes. Okay. So now that we've done this, let's take a peek at something that is pretty cool. We can actually see our dashboard here. Now, if you notice this dashboard, you'll see that there's been a lot of activity since we've started the endpoint over the last 30 minutes.
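The async stress test Prompt describes might look roughly like this. The endpoint URL, token, and generation parameters are placeholders, and the payload follows the standard text-generation format that TGI-backed endpoints accept; it is a sketch, not the exact notebook code.

```python
# Sketch: fire many concurrent requests at the protected inference endpoint.
import asyncio
import aiohttp

API_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"  # placeholder endpoint URL
HEADERS = {
    "Authorization": "Bearer hf_...",  # Hugging Face token, since the endpoint is protected
    "Content-Type": "application/json",
}
QUERY = "Excellence is never an accident. It is always the result of high intention..."  # long demo prompt

async def call_endpoint(session, query):
    payload = {"inputs": query, "parameters": {"max_new_tokens": 256}}
    async with session.post(API_URL, headers=HEADERS, json=payload) as resp:
        return await resp.json()

async def main(n_calls=500):
    # Launch all 500 calls concurrently and wait for them to finish
    async with aiohttp.ClientSession() as session:
        tasks = [call_endpoint(session, QUERY) for _ in range(n_calls)]
        responses = await asyncio.gather(*tasks)
    print(f"Got {len(responses)} responses")

asyncio.run(main())
```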
So you can see here that we have the testing that we did, where our requests were getting very high, and this is basically just showing the model chugging through all those requests. I'll extend this back out to the last three hours now so we can see the spikes. You can see the different times we tested it, and you can see that there are not a lot of incidents of 400 responses. It looks like we did have a few; maybe someone was trying to programmatically access the endpoint without the API key. But the idea is that we're servicing a lot of requests without crashing, right? And we're not seeing any 500s, which is awesome. The downside is that, since we're not scaling, we're only using one replica here, and when we look at our settings, we can see that we're not doing any auto-scaling, right? So when we have loads of requests, if I just go back to that two-hour window, we get these spikes in latency. This is how that auto-scaling can help us out, because it can help us better ensure that we have fewer spikes when we start sustaining a lot of traffic. There's going to be some lag time, of course, with the auto-scaling, but it's not so bad. And you can see that the median request latency is not so bad. Now, in the front end that we are using, you might see that it feels slow. That's because we're not using the streaming use case, and so it waits for the response to be fully formed before sending it back, but the time to first token is not very long. You can also see that under load we're using a lot of our card and very little of the rest of our machine. But this is the idea, right? We can hammer this thing with those 500 requests at a time and it chugs along. All right. So that's the idea, and that's how we can track the usage. We can also see our specific usage: how much time it's been up, what our rate is, and how much cost we've incurred. So for this, since we opened it before this event, it's been about two dollars, and by the end of the event it will have been two dollars that we've spent to do this. So not so bad. But with this, I'm going to pass you guys back to Greg, who will take us to Q&A. Before I do, I'll remember to say: don't forget to like, comment, subscribe, and ring the bell notification. We go live every Wednesday. We love to see you out here, and we'll switch you over now. Yes, thank you, Wiz. That was awesome to see. And we are actually going to do the final leg of our end-to-end prototyping next, which constitutes the UI. I'll show you a little bit of what's going on behind the scenes with Chainlit. And for those of you that are wondering, well, Chainlit or Streamlit, it's like, those are stupid UIs, those aren't scalable, right? Well, I wouldn't be so sure. I was talking with Dan, the co-founder and CEO of Chainlit, not that long ago, and he told me that they had someone deploy an app that had 1 million users per week on Chainlit. So get some, Chainlit. That's pretty cool. And I certainly have heard, I don't have any verified quotes, but I've heard that Streamlit doesn't do too bad as you scale up either. It's like, really, what scale are you servicing? And when does this break?
In the meantime, why not use it as a sort of member of your team instead of hiring a UI expert and designer? It's also got a really simple syntax: you can start the chat, you can keep the chat going, and that's pretty cool. Once we have a Chainlit front end, we can deploy it to Hugging Face Spaces. We've talked about this before. We can one-click deploy. You don't have to have a GPU to deploy a Space, but you can attach GPUs super easily, and you can even go via Hugging Face Inference Endpoints. To close us out, we're going to go back to Prompt Engineering to show us the sharing part, the oh-so-important sharing part of end-to-end prototyping, as we get our work in front of others who are non-technical. Prompt, over to you, man. Okay. All right, so we're going to talk about how to quickly create this front end. As Greg was saying, Chainlit is pretty amazing. It's very simple and straightforward. So here, you first need to provide your prompt template, exactly the same as what we used during the fine-tuning process. In this case, you want to make sure that you stick to Llama 3's prompt template so that it actually generates good responses. Then you're going to provide your API URL, the same thing that I showed you before, right? Some headers for authorization. Then we pass on our query. This is the prompt template into which the user input is going to come; that is going to be formatted, and we'll create a query based on that. And we will create a simple input box where we'll get the input from the user and pass it on to our API to generate a response. Okay. So this is what it's going to look like in terms of the final UI that is deployed on Hugging Face Spaces. And we can test this out. So let's say we're going to test it out again. Now you just want to make sure that this is running. We can pass this in. All right. This will take some time, but we do get a response. All right. So this is how you deploy it. Back to you, Greg. Awesome. Yeah. Easy, peasy, lemon squeezy, quick deploy. Awesome to see that. As we think about and reflect on the session here a little bit, we can say, okay, what did we learn? Well, we got pretty existential today and we learned that the greatest wealth is to live content with little. In other words, the most valuable flex is being low-key rich in your own mind. We also learned that being born not knowing and having little time to change is the nature of our existence. In other words, we're all noobs and we only have a hot sec to upgrade that. We only had a hot sec to upgrade our end-to-end prototyping game today, and we did it through the lens of building, shipping, and sharing. We hope you enjoyed each piece. Of course, we've got lots of content that digs deeper into all of these things. The star of the show today was API endpoints, which solved the problem of running inference on GPU. Cloud computing not required, but you can use it. And this gets us right into Q&A. We've got a few questions in this regard. I welcome Prompt and Wiz back up to the stage here, and I want to focus in on that piece of it first. Aren't there inference endpoints in AWS through Bedrock? Wiz, like, duh, why didn't we just use that? I mean, there sure is. Yeah, there are inference endpoints through everybody. And in fact, if you look very closely, the inference endpoints that we're using today are secretly going through AWS anyway; it's just, click the Hugging Face UI to get them. The idea is pretty straightforward.
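Circling back to the front-end walkthrough just above, here is a rough sketch of that kind of Chainlit app. The endpoint URL, token handling, response parsing, and the Llama 3 prompt template are assumptions standing in for the ones used in the demo, not a copy of it.

import os
import requests
import chainlit as cl

API_URL = "https://your-endpoint.endpoints.huggingface.cloud"  # hypothetical endpoint URL
HEADERS = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# Llama 3 chat-style template; the fine-tuned model expects this format,
# so the front end should match it.
PROMPT_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

def query_endpoint(user_input: str) -> str:
    # Format the user's message into the template and POST it to the endpoint.
    payload = {"inputs": PROMPT_TEMPLATE.format(user_input=user_input)}
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    # Text-generation endpoints usually return a list of {"generated_text": ...};
    # adjust the parsing to whatever your endpoint actually returns.
    return response.json()[0]["generated_text"]

@cl.on_message
async def main(message: cl.Message):
    # Take the user's input box message, call the endpoint, and send the answer back.
    answer = query_endpoint(message.content)
    await cl.Message(content=answer).send()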
Like, the Hugging Face ecosystem is set up to hold your model, your data, and it can also serve as the inference solution for that model. So that's why we went with Hugging Face. Now, you can, of course, use AWS, Azure, whatever your favorite cloud provider is, if you want to. But we wanted to show a simple, few-click way to get the same thing without leaving the ecosystem you're training your model in. Okay, all right. Keep questions coming, everybody. Prompt, taking this one over to you. You've done a lot of fine-tuning on your channel. Can you share tips about how to decide the amount of data for effective fine-tuning of any given LLM? Okay, this is actually a very controversial question. Some of the research that I've seen says you can get a really good fine-tune out of the smaller models with up to a thousand examples, but those thousand examples have to be extremely good, right? You can't just, like the traditional term goes, garbage in, garbage out, right? So if you have a million examples of bad data, it's not going to learn. But if you have some good examples, up to, I think, a thousand examples, they seem to work pretty well, at least for the smaller models. And I think, gosh, what were you saying the other day? You were talking about fine-tuning to us, and you were talking about how the 1 and 3 billion parameter models, they even sometimes suffer in their performance if you do too much quantization. You said something along those lines. Yeah. Actually, there are a couple of things. That's a good point. When you fine-tune a model, a lot of people use a quantized version of it, because that gives you better inference speed and the memory usage is pretty small, right? Now, the problem is if you have a smaller model, like, for example, Phi-3, that is going to suffer a lot more if you quantize it heavily. So if you use eight-bit quantization versus four-bit quantization, you would see a drastic change in the performance. The same holds true for the bigger models as well. If you quantize a bigger model, like 16-bit, eight-bit, four-bit, you will see a difference in performance, but for the smaller models, it's a lot more prominent. Fine-tuning legend over here, Prompt. Yeah, follow @PromptEngineering on YouTube for more fine-tuning tips. Wiz, to you on this one from Jorge: how much do you expect this setup's inference to cost? We saw it cost $2. Can we talk about this on a per-token basis? How should we be thinking about the cost model here that we're using? Yeah, I mean, for Hugging Face Inference Endpoints, it's easy. If the model is going to be up for four hours, it's going to cost you four times the hourly cost; if it's going to be up 24 hours a day, do the math. If you want to think about it in per-token costs, you can think about it by estimating how many tokens you expect to serve through that period of time, but it's an hourly, or per-unit-time, cost. So in this case, I would focus more on that. When it comes to the replicas and the auto-scaling, that is where it's going to be a little bit more complicated. In that case, I would think about doing the math to find a per-token cost, and that's going to help you decide. Let's say you receive enough requests that you're going to have four replicas. Well, that's going to cost you 4x. How much time should you sustain that for? Yada, yada. So it gets more complicated there.
For this specific one though, it's going to be a dollar per hour, regardless of how many requests you get. And is that per hourly rate, is that something that you would look at as you went to cost optimize as you started to grow? Is that one of the places you would start? Oh yeah. If we know, for instance, let's say we need more than one replica at a time to service our requests with decent latency, right? Then we want to find times during the day where our users are really slamming the endpoint and then spin up more instances then and spin them down when they're not. So there's a there's a lot of different ways you can handle this in order to keep a cap on cost at the at the cost of some latency for your users nice okay okay um uh fee slides are available we will drop them on youtube in the comments as usual and then prompt i you, we got this question in the chat here from Edgar. It seems like it just will not go away. What do you think about the value of doing a prototyping event within a cloud provider? Why do we avoid that today, in your opinion? Well, I think it's a lot more complicated it will not be able to cover those in an hour right uh to begin with and i think for beginners it's really good to have something up and running uh and then you you can think about uh these cloud providers like for a lot of companies for example right uh they don't really have even an option of which cloud provider to choose. They will stick to the existing cloud providers. So even let's say for companies on GCP, they're going to stick to GCP, even if AWS is offering better prices, because usually they have long-term agreements in there. So to begin with, I would say just use something like this which is pretty easy to set up but once you pass a certain point where you want to look into optimization and stuff like that, then yeah, you can explore all these things and there are some really good tutorials out there. I want to kind of double click in on this as we end. Thank you everybody for the questions and we'll do our best to get the ones that are relevant to fine-tuning over to Prompt to answer async and sort of make sure that we get answers to some of the more interesting questions here. But one of the things Wiz and I talk about Prompt a lot is like the actual job of an AI engineer and where it stops and what it's not. And I think a lot of times when folks are asking for the cloud computing platform content, it seems that they're asking for the content that's going to be stuff that they're not typically doing in their day job as an AI engineer. Correct me if I'm wrong on this contextualization, Wiz, but this is the core idea that we're pushing. What are your thoughts on this? Where does the AI engineer job description and job duties, where does it run out? Is it at the cloud computing platform? Is it well within the cloud computing platform? How much cloud computing platform do people need to really know? So I think about this question, like, let's say a couple of years ago, people would think about data scientists. Okay, so data scientist role was really dependent on which company you're working at. If you're working at a startup, a data scientist would be like an end to end engineer who would be like doing the data cleaning, data extraction, model training, then deployment, right? And if you are working at a big corporation, then data scientist is more of like, okay, I have to create models. So for AI engineer, I think we are going through a very similar phase, right? 
It's really good to know different technology stacks, I think. It's good to know the deployment part as well, but depending on which organization you are working in, your role is going to look very different, right? Yeah, so that will be my answer. I know it's not a clear-cut answer, but yeah, it's a hard question. Yeah, right. It is, it is. And lots of people want to know the answer. And basically, I guess there's a reason why we don't teach AWS, or GCP, or Azure: because, again, you don't get to pick it. And if you want to learn it, you know where to go. Go ahead and just learn away on AWS, on GCP, on Azure with their materials. They are the experts, and there are people with full-time jobs dedicated to making you understand that ecosystem. Just to add to this: I think it's really important to know the concepts. For example, we covered deployment today, right? So if you know how to deploy a model, or what it takes to deploy a model, then what platform you choose is kind of irrelevant. You just need to know the steps that you need to follow, right? And as you said, there are a lot of resources available depending on which platform you choose. Yeah, we just, we got to drive the car. We don't always have to build the car. As LEGO Block AI engineers. Well, thanks Prompt. Thanks Wiz. And that is the bookend of our end-to-end prototyping applications event today with Llama 3 and Hugging Face, special guest Prompt Engineering. Don't forget to like and sub, ring that bell, baby. And if you enjoyed this, you want to keep building, shipping, and sharing with us, join the AIM community. We would love to have you. There's folks talking about projects that they're building, shipping, and sharing all the time. It's really exciting to see what's starting to take hold, led by community for community. Join us in there. And you can start learning for free with us. In addition to our YouTube, we've collected a cohort on LLM Ops that we taught in 2023. We're looking forward to sharing more cohort-based courses in an open-source way in the coming months. And if you are ready to accelerate right up to the edge, we always have our AI Engineering Bootcamp that's ongoing. It's a seven-week accelerated learning experience from your first LLM application to your first RAG, first agent, first fine-tuning, LangChain, LlamaIndex, assessment, production, and demo day. We've got a demo day coming up. We'll invite all of you guys to join live, and definitely check that out if you're interested. We've got lots of other cool stuff coming down the pipe, but for now we'll see you next Wednesday live on YouTube. In the meantime, keep building, shipping, and sharing, and we will most certainly, as well as Prompt Engineering, do the same. Thanks so much, everybody. See you next week. Bye.
End-to-end Prototyping with Llama 3
3,726
AI Makerspace
20240502
Join Dr. Greg, The Wiz, and Prompt Engineering for an exclusive YouTube event! Dive into the complete journey of building, shipping, and sharing AI applications with the Hugging Face Hub. Learn how to curate datasets, fine-tune models, and deploy them with robust API endpoints. Discover how to enhance your AI projects with user-friendly interfaces and finalize them for production using Docker containers. Whether you're new to AI or looking to sharpen your skills, this hands-on session will equip you with the knowledge to streamline your workflow and bring your AI solutions to life. Don't miss out—click to join us and transform your AI concepts into real-world applications! Event page: https://lu.ma/llmappshf Have a question for a speaker? Drop them here: https://app.sli.do/event/3VAruCSULjSYF2UT7BMoof Speakers: ​Prompt Engineering https://www.youtube.com/@engineerprompt Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/xm9MsAHV7oJsTxvT7
2024-06-13T21:58:36.909152
https://www.youtube.com/live/wYZJq8CNmTw?si=XGolT3th2UIegPtd
Hey Wiz, what do you think are the best adventure films out there? That's a tough question to answer, Greg. Yeah, yeah, yeah. What if I gave you some data? What if I said, well, between Lord of the Rings, The Hobbit, or Dune, maybe Harry Potter? How would you pick the best? Well, I'd probably think about maybe the ratings and, you know, who is in it and what happens in the movies, like how much I enjoy them. You know, it's still a tough question, though. So you'd do some reasoning to try to figure it out. Well, maybe there are some critic scores; maybe those are quantitative. Maybe you might think about the plot and the details of the story, what people thought about that. There's essentially a lot of data you might try to consume to really answer this question best. Isn't that right? That's absolutely true. Yeah. So you kind of have to reason through different types of data, almost doing what we might call agentic behavior. Isn't that so? That's right, Greg. Yeah. Well, do you think we can build an AI to answer this question of what is the best of the best today? I think we can definitely try. Yeah. All right. Well, that's what we're going to do. We're going to see you back when we're ready to start building. Welcome everybody today to Data Agents with LlamaIndex. My name's Greg, and that was Chris, aka Dr. Greg and the Wiz. We're from AI Makerspace. Thanks for taking the time to join us today. Today we're going to try to build an intelligent AI system that uses agentic reasoning behavior to discern, well, what are the best of a couple of movies out there? You've all maybe recently seen the Dune series, and we're curious to see how it stacks up against some of the classics. And we've got the perfect tool in store today to be able to look at this. We're going to use data agents with LlamaIndex. By the end of today, you'll understand the LlamaIndex core constructs of the framework and how they come together into data agents. We're going to do a deep dive on the technical architecture. We're not going to get too high-level, pie-in-the-sky with the concepts today. We're going to spend a solid hour in LlamaIndex building this thing out. We hope you enjoy it. Let's get into it. If you have questions along the way, please drop them in the Slido link or directly in the YouTube live chat, and we will do our best to get to all of them today. All right. So as we align our AIM for the day, we really want to understand these core constructs. That's what we're going to build with. That's what we're going to build on. Then we're going to go ahead and build a complex RAG application that uses agentic reasoning. We're going to build indexes for both qualitative and quantitative data, meaning movie reviews and context about those movies. We're going to leverage metadata filtering to be able to move between these two indexes, and that is going to be the agentic reasoning piece of our application. And hopefully we can get an answer. What do you think the best adventure movies are out there? Maybe you can throw it in the chat. So today we're going to dive into the core constructs and then articulate how to build out the separate semantic and SQL-based pipelines that we can combine and navigate using metadata.
All right, so LlamaIndex, just super high level. What are we talking about? This is the data framework for LLM applications that benefit from context augmentation. This is the keyword they are using today. Now, don't get confused: context augmentation is nothing more than retrieval augmented generation, or augmenting what's in the context window. That's called in-context learning. Now, when we do fact checking, when we try to make sure we're not hallucinating, it's all about reference material; that's what we're augmenting with here. So, you know, again, what's the best? We're going to need some fact-checkable information to back up our response by the time we're done with this thing, right? We want to get good answers that are rooted in reality, rooted in facts. Overall, the RAG process is to ask a question, feed that question into an embedding model to get a representation of our question in embedding space, then look for similar information within our vector store. We're going to set up a prompt template that's ready to receive reference material, and then we're going to inject that reference material back in natural language before we put everything into the LLM. Now, this process is the retrieval process, and this process really does lead the way, because retrieval done well makes generation go better. It improves our ability to leverage context. At this point the retrieval part of RAG is mostly covered, but we haven't done the G, the generation, yet; this is the next step towards production-ready, context-augmented LLM applications. And so this is very cool because they're doing a lot of cool things. But one of the things that they did is they said, well, the main abstractions are going to be in core. That's really what we want to focus on today. And then there are a lot of other things that are going on, a lot of things you can interface with. We'll see a few of those today. But we want to, again, focus on the core so we can really build up this idea of data agents. Now, when they released LlamaIndex v0.10, they showed this image, and I really like it. We're going to focus on the core package today, but we're also going to see that we're leveraging a number of other pieces. We're leveraging the agents, we're leveraging LLMs and embeddings. Of course, LlamaIndex doesn't have LLMs or embeddings in its core, because it doesn't create LLM and embedding models. It's a framework that allows us to leverage those LLMs and embedding models. We can also leverage indices from other companies. For example, we'll leverage Qdrant today, and a vector store is a particular type of index. That is the type we're going to use today, the vector store. And then, of course, we can have access to tools. We'll see one tool that we'll use today to help us actually build that metadata filtering capability, namely the OpenAI functions API, the function calling API. But before all that, let's focus on the core of LlamaIndex. Let's focus on the core constructs. Let's walk through this step by step. The core constructs are associated with steps of the process, the same process we just diagrammed out. We need to load our data, we need to index all of the data, we need to store it, and then we can ask questions against it. The structures within LlamaIndex core that we need to be able to leverage are nodes, embeddings, indices, retrievers, query engines, and data agents.
Let's walk through step by step. First, we can think about loading, and we're going to load into nodes. Loading is nothing crazy; it's just ingesting your data, but the actual name for the chunks that we're going to use in LlamaIndex is nodes. So these are some nodes pictured within a database, a little vector store here. So nodes are the way that data is ingested. It's the how of data ingestion. So nodes are very, very important in LlamaIndex and everything builds on nodes. They're nothing crazy; each one is just a chunk, but it does have metadata associated with it, and metadata allows us to be more intelligent down the line, because it's data about our data. It's up a level of abstraction, and we can use that later as we layer in more and more complexity. The way we get nodes is we parse up our data. We generally use a node parser. This does the chunking of our documents. Okay, not very complex here. Once we get data loaded in, we're ready to do some indexing, and we need to leverage embeddings to do this. Now, the process of indexing is simply the idea of structuring our data so that down the line, we're going to be able to easily do retrieval. Okay? Remember, retrieval augmented generation. So indexing is very key, because the better we structure, the easier it is to retrieve. And the way this looks in LlamaIndex is we, of course, split our documents into chunks, nodes, with our node parser, and then we're going to create embeddings for each chunk. Now, each chunk of a document is a node, and that node might have metadata associated with it as well; often it does. We're going to pass our chunks through an embedding model to get a representation of that language in embedding space. And all of this is going to allow us to get essentially a list of nodes. This list of nodes is where our indexing process is completed. Well, what's next after we've done the indexing? Well, I mean, we don't want to do the indexing again every single time we have to build something. So after indexing, we want to store our indexed nodes within an index, indices just being the plural of index. Now, again, storing is just about avoiding re-indexing. We could run the entire thing every time, but that would be dumb, because we would just be wasting compute to get representations we already had. So this use of embeddings that actually allows us to store our list of nodes goes directly into the vector store. So we take our list of nodes, and this vector store is a specific type of index. It's the most common type. It's the type you're probably going to be building with, but there are other types of indices. We don't really need to worry about those today, because we're going to use vector stores. But that's really all there is to it. Storing is putting stuff in a database. Once we have that stuff in a database, what's next? Well, we want to be able to wrap our vector store, our database, in a way to get stuff out of it, in a way to actually do the retrieval. This is where we need a retriever to allow us to do our querying. And of course, this is just asking questions, but we're going to ask questions and get answers based on the LLMs and data structures that we are using through this process. So the way we did the embeddings, the way we set up the data structure, the way we decided on metadata, all of this matters when it comes to what is retrieved.
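To make the loading and indexing steps concrete, here is a minimal sketch of chunking documents into nodes with metadata, assuming the llama-index 0.10 module layout; the document contents and metadata values are placeholders.

from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

# A document is just text plus optional metadata (data about our data).
docs = [
    Document(
        text="...full Wikipedia article text would go here...",
        metadata={"title": "Dune (2021 film)"},
    )
]

# The node parser does the chunking; each resulting node keeps the metadata.
parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)
nodes = parser.get_nodes_from_documents(docs)
print(len(nodes), nodes[0].metadata)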
And so the retriever is simply fetching stuff out of the vector database. It's getting nodes. And the nodes are then often brought back and synthesized using the synthesis modules in LlamaIndex. Now, it's important to understand, and we've talked about this quite a bit recently over the past few weeks, that when we think about this querying process and we think about retrieval, this is what actually gives us the context augmentation. And so both the chunking, a.k.a. the creation of nodes, and the retrieval, a.k.a. the retriever, affect this context augmentation, because we're just wrapping the vector database in a way to find stuff that's similar to our question, or, in the case of a more complex system like we have today, to route to the proper database to find stuff that's relevant to our question. And so when we think about the way this is split up, we can think about there being the chunking and then the retrieval, and we want to optimize both of these. It's hard to find perfect ways to visualize this, but we talked last week, if you joined us for advanced retrieval, about the fine line between chunking and retrieval existing at about the vector store. This is where we're going to dive in today. For more on that, check out our advanced retrieval video from last week. But going back to where we started here, we're just talking about this process. Now, today we're going to do something a little bit more complex, but let's stack this in terms of LlamaIndex core constructs. After we've done loading, indexing, storing, and querying, and since we're not going to be doing evaluation today, it's time to talk about the query engines. And the query engines are simply generic interfaces that allow you to ask questions over the data. They might be built on many indexes, but they're built on at least one. The index and retriever form the basis for our query engine. It's hard to visualize query engines, to be honest with you. I haven't found a great way to visualize it, but here's another way that we could visualize it, taken directly from a blog from Jerry, CEO of LlamaIndex, from last year. The query engine really is kind of the heart of what's going on, because the query engine as a unit, as a sort of pattern that we can leverage, allows us to build very, very interesting, much more complex applications. Well, that's a good lead-in to the idea that we've been trying to get to all along: data agents. And data agents take this idea of a query engine and expand it out. So data agents, and I like the LlamaIndex nomenclature here, they call them LLM-powered knowledge workers. They're doing this automated reasoning, or perhaps semi-automated reasoning, and decision making to decide: okay, I'm going to do some reasoning to decide, do I go hit an API that I have access to, maybe a Gmail API, a Slack API, a SERP API for Google? Maybe I'm hitting DuckDuckGo. Maybe I'm hitting something else. Or maybe I'll just go ahead and tie a couple of query engines together. And now you see the query engine is contained all within a single block here, but there's a lot that goes into a query engine. In fact, there's potentially an entire RAG application behind that query engine. And so what we're going to do today is expand this out, and we're going to use what's called the OpenAI function agent in LlamaIndex.
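Before going up that layer of abstraction, here is a small sketch of the retriever and query engine constructs just described, assuming the llama-index 0.10 module layout and the `nodes` list from the earlier chunking sketch.

from llama_index.core import VectorStoreIndex
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine

# Build an index over the nodes (this is the storing step).
index = VectorStoreIndex(nodes)

# The retriever fetches the top-k most similar nodes for a query...
retriever = VectorIndexRetriever(index=index, similarity_top_k=3)

# ...and the query engine adds the response-synthesis step on top of it.
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("Who is the evil wizard in the story?")
print(response)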
And you see straight from their docs, agents are, quote, a step beyond our query engines: a layer of abstraction up from the query engine. It allows us to do reasoning, and it gives us access to tools and APIs. This is built on the OpenAI function calling API, which allows us to intelligently call functions, which is great; the model is fine-tuned for that. And what we're going to do is leverage the OpenAI functions API to build this OpenAI function agent, to allow us to look at two different query engines that we build up, and decide which to use based on the metadata associated with them. In short, we're going to build a functional tool that helps us decide which query engine we're going to use. And remember, query engines are built on one or more indexes via retrievers. That's why we call this the auto retriever functional tool. It's going to allow us to look at our question, look at the metadata associated with our query engines, select not only the right type of data but exactly which aspect of that data, and then dig in and try to return the best possible context. So I'm being a little abstract; let's bring it down a little closer to earth here. Today's combined build, in short, is going to be up a layer of abstraction. We're going to take an input, we're going to go to a data agent, and it's going to decide: should I use the semantic query engine or should I use the SQL-based query engine? Should I use the quantitative, SQL-based data, or should I use the qualitative data? And we're going to build these indices and retrievers out with different types of data. So our semantic RAG pipeline is going to be based on Wikipedia data. We're going to take Wikipedia data on Dune films 1 and 2, Lord of the Rings films 1 and 2, Hobbit films 1 and 2, and Harry Potter films 1 and 2. Then we're going to create an index. We're going to prep the Wikipedia data, build a vector store index, and build a vector index retriever and a retriever query engine. Look at that, all the keywords all at once. Then we're going to go get some quantitative data. We're going to do this using IMDb ratings for Dune 1 and 2, Lord of the Rings 1 and 2, Hobbit 1 and 2, and Harry Potter 1 and 2, and we're going to build out that index. This is pretty cool, the way this works, because we're going to download the data, it's going to be in CSV format, and we're going to create data frames just using pandas. Then we're going to use the SQLDatabase index that's built into LlamaIndex, and then the very powerful and very cool natural language to SQL table query engine, where we can ask questions in natural language and query our SQL data directly. We're going to use OpenAI models to get it done today, including the new embedding model from OpenAI. And that's really all there is to it. So again, let us know what you think the best movies are, ones and twos only, because it's not fair to all of them otherwise. Let's see what the AI thinks as we head over to Wiz for some agentic RAG with LlamaIndex and data agents. Wiz, over to you, man. Yes. Okay. Hello. Thank you. So let's look at the notebook. Basically, we've got exactly what Greg said. The idea here is that we have these data agents, so many different agents, right, that we want to work together in some kind of way that makes sense.
So what are we going to use? Well, first of all, we're going to use a few new terms that you might not have been so familiar with, and Greg's already done a great job explaining those. They're explained a little bit more in the notebook, but that's fine. So first things first: there are a lot of images and a lot of stuff going on here, obviously, but it's a lot simpler in the code than it looks here. So let's first move to the main boilerplate. This is just necessary; we just have to do it. There's no way around that, and unfortunately there's nothing we can do about it. The next thing we need to do is get our libraries. Now, you'll notice we're getting LlamaIndex and OpenAI. So LlamaIndex makes sense, OpenAI makes sense. We're also going to grab the LlamaIndex Wikipedia reader, and we're also going to use the Qdrant vector store today. The Qdrant vector store is basically a really fantastic vector store that's going to help us understand exactly what we're storing everything in. When it comes to vector stores, and this is an important point, the actual vector store that we're using is pretty important. The reason is that when we think about storing all this information, we want to be able to say, can we access that metadata cleanly? And the way that we do that is we can use Qdrant; it's going to basically handle all of that for us. Very convenient, and obviously a great way to do it. So there you go. We're also going to use SQLAlchemy and pandas. SQLAlchemy is going to be the thing that gives us the in-memory SQL database, which is pretty important, and pandas is going to help us transition from our CSVs to that actual SQL database. We also have this optional piece, which is optional, so you don't have to use it or think about it if you don't want to, where this is all going to be available to us through the Weights & Biases (wandb) integration with LlamaIndex. So we can see all of the calls that we're making; we can keep a history of all the calls that we're making, which ones are failing, which ones are succeeding. But this is an optional piece that we can ignore if you would like to. So next up, we have our environment variables, and we can also set our wandb callback if we're going to use that. And then next we have our Settings. So if you remember the previous versions that we've used, the idea is: what LLM should we use if we don't specify an LLM? Well, we would prefer to use that version of the model when we don't specify anything. So you can think of these as defaults, basically. So we've got LlamaIndex set up, and we've indicated what things we want to use as our defaults. Now we have to create an actual index. Creating an actual index is not a difficult part.
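Here is a minimal sketch of those "defaults" being described, setting a global LLM and embedding model in llama-index 0.10 Settings and pulling the Wikipedia pages used later; the model names and page titles are assumptions based on the walkthrough.

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.readers.wikipedia import WikipediaReader

# Global defaults: used anywhere we don't explicitly pass an LLM or embedder.
Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")  # 1536-dim vectors

# Disambiguated Wikipedia titles for the movies (a shortened, illustrative list).
movie_titles = [
    "Dune (2021 film)",
    "Dune: Part Two",
    "The Lord of the Rings: The Fellowship of the Ring",
    "Harry Potter and the Chamber of Secrets (film)",
]
wiki_docs = WikipediaReader().load_data(pages=movie_titles, auto_suggest=False)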
So we have this idea of vectors and vector stores and all this other stuff, and what we want to do is make sure that we have this set up in a way such that we have the index created cleanly, right? The way we're going to do this is by using the LlamaIndex core to do that. Now, before we can actually set up an index, we need some data. The data that we're going to use today, of course, is as described. This is just a bunch of different movies. These are the disambiguated Wikipedia titles that the Wikipedia tool succeeds with. The big idea here is basically just to fetch a lot of Wikipedia articles about these movies. Okay, first of all, that's done. Second of all, we need to set this up cleanly; we need to set it up so it works well. For this example, we're going to use the in-memory Qdrant client. That in-memory client is very powerful, but it does come with the fact that it's running all in memory, all the time, and in this session. So for a production environment, of course, we want to set up a separate Qdrant client, but for this particular instance today, just so it runs nicely in the Colab, we're going to use the in-memory Qdrant client. The next section is just setting up a collection. A collection is kind of like metadata on top of metadata. So not only do we have metadata, but we also have collections. We can store different kinds of information in different collections, and we can point those collections to different vector stores. Now, we're not going to do that today, because we're already doing quite a bit, but it is a thing we can do, and it is pretty dope. You'll notice that when we create our collection, we have to manually set our vector parameters. This is because Qdrant is going to expect that all of our vectors have this size. This size is based on the default embedding model that we chose, which, if you remember, we set up here in Settings, and that's text-embedding-3-small. It just happens to have a dimension of 1536. Now that we have our collection and our client, we can create our Qdrant vector store. Our vector store is going to be powered by Qdrant, right? We talked about the benefits of that: it comes with all this metadata stuff out of the box, works really well, scales really well. You'll love to see it. We're then going to create our storage context. Our storage context is what it sounds like: it's context relating to our storage. I know I just said it backwards. But the idea is that we need to have some kind of way to tell LlamaIndex how it should interact with this vector store, right? So, when we create our vector store index, we have a really detailed view of this vector store, and that's what that storage context does. It acts as a nice layer between the vector store index and the thing that's powering our vector store index. We're going to initialize this as empty, you might notice. So there are no docs here, which is, you know, not useful, right? We know we have a lot of docs, but there are none here. And we're going to go through the process of adding nodes manually, because we'd like to add this additional metadata, the title. So the idea is we have some ingestion pipeline that's going to convert our documents into nodes, and then we're going to insert those nodes into our vector store. And the reason we're doing this manually is because we want to add some metadata.
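Here is a sketch of that setup, assuming an in-memory Qdrant client and the llama-index 0.10 Qdrant integration; the collection name is illustrative, and `movie_titles` and `wiki_docs` come from the Wikipedia sketch above.

from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.vector_stores.qdrant import QdrantVectorStore

# In-memory client: fine for a Colab demo, not for production.
client = QdrantClient(location=":memory:")
client.create_collection(
    collection_name="movie_wikis",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),  # matches text-embedding-3-small
)

vector_store = QdrantVectorStore(client=client, collection_name="movie_wikis")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Start from an empty index, then insert nodes manually so we can attach metadata.
index = VectorStoreIndex([], storage_context=storage_context)

pipeline = IngestionPipeline(transformations=[SentenceSplitter()])
for title, doc in zip(movie_titles, wiki_docs):
    nodes = pipeline.run(documents=[doc])
    for node in nodes:
        node.metadata["title"] = title  # the metadata field the agent will filter on
    index.insert_nodes(nodes)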
Now, of course, you can add whatever metadata you want, it doesn't matter, but you have to follow this pattern if you want to add that additional manual metadata. You'll also notice that our ingestion pipeline is going to convert our actual documents, as you can see here, into nodes, which is useful, because we need to convert them into smaller pieces so that they're more compatible with our RAG pipeline. Okay. So now we've inserted a bunch of nodes, and just to be clear, we take each movie and its corresponding wiki doc, this is the title of the movie, we ingest it, we turn it into some nodes, and then for each node we add this metadata, which is the title of the movie. And so now we have an index that has a bunch of nodes, and the nodes are associated with their title. So now we can create a simple query engine: our simple RAG is just our index.as_query_engine(). Couldn't be easier. You might think, well, what LLM are we using? We didn't tell it what LLM to use. And you're right, but we did. Remember, in our Settings we specified this default LLM, and that default LLM is going to be OpenAI. Pretty cool. All right. So now that we have that, let's check out what the actual prompt is. The prompt is pretty simple: "Context information is below", here's some context, "Given the context information and not prior knowledge, answer the query", query, and then it's expecting some answer. It also has a refine prompt that can be useful if we want to leverage it; we won't be specifically leveraging it today, however. So we can ask questions like, who is the evil wizard in the story? And we get Lord Voldemort. Now, already we can kind of tell what a potential issue is: if we're dealing with six different movies that all have evil wizards, this is not necessarily the response that we were hoping for or looking for, right? Now, we could also ask questions like, who are the giant beings that roam across the world? And it can say, stone giants are the beings that roam across the world. But again, Dune has sandworms, we've got Ents, you know, so we're kind of missing the specific information based on the movie that we're providing, which is not necessarily the most useful thing in the world. So let's make that better by using the auto retriever functional tool. We've got an image here that you see in the slides. Basically, all this is saying is we're going to do some filtering before we return our list of potential contexts, and this filtering is going to be based on our LLM's decision. So when we go to our actual implementation here, you'll see that we have this vector store info. This is a collection of metadata, which is going to include the title of all of our movies, right? And it's basically just going to say, hey, based on the metadata, you're going to want to have some awareness of what you can choose as part of that metadata.
So if we tell it, hey, you can choose the title of a movie, but we don't tell it what movies we have, that's not a very useful tool. So we have to, again, whenever we're doing AI engineer type of work, building RAG applications, working with LlamaIndex, be sure that we're thinking about how the LLM is going to interpret this information, right? How is it going to leverage this information to give us a sweet answer? And so in this case, you can see all we're going to do is let it know the titles of the movies we have. And then in this next piece, I'm not going to spend too much time on this, but the idea is that this is just describing the function that we want to be able to call that's going to filter our metadata, right? So we have here some query, we have some filter key list and some filter value list. And then we're going to build this auto retrieve function. Now this is a function that lives on our side, and basically all it does is retrieval based on a set of filters. You'll see it takes query, filter key list, and filter value list. That might seem familiar; we just built that above in Pydantic. It's going to build a set of exact match filters based on the keys and values we've provided. In this case, we're keeping it simple: title is our only metadata field, so if it needs to filter, it's going to use title. And then we're going to use our vector index retriever, similar to what we did before. We're going to set up a query engine on top of it using a retriever query, or sorry, our retriever query engine. And then we're going to go ahead and query it with our query. And that query is going to take into account this actual metadata filter. So we'll see an example of how this works in just a second. But the idea is all that's left is that we want this to be in some form agentic. And so what we're going to do is describe our tool and when we should use it: use this tool to look up non-review-based information about films. And then we're going to have our auto retrieve tool take the function that we've created, right? And that function that we've created, again, is this function. And then it's going to describe the tool name. This should be descriptive, because the LLM is going to use this information to determine when to use this tool. We're going to provide our description, which, I mean, it makes sense, but it should be descriptive. And then we're going to use our auto retrieve model, which is this bit here. Now, the reason this is important, this auto retrieve model from Pydantic, is because we need OpenAI in this case to be able to call our function reliably, and this Pydantic model helps us describe in great detail to OpenAI how to call the function. You'll see we have query: it should be a string, and here's the description of how that string should look. We have our filter key list: it should be a list of strings, and here's what it should be, right? So this is the idea. We have to describe to OpenAI exactly how to call this function, and that's what we've done using this auto retrieve model that we built in Pydantic. Very cool. Okay. All that's left to do now is let her rip, right? So we're going to create an OpenAI agent from tools. We're going to pass in this one tool. We're going to have verbose equal true, just so we can see the outputs. And then we're going to say something like, who starred in the 2001 film? Now check this out.
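For reference, here is a condensed sketch of the tool setup just described, with the Pydantic schema, function body, and descriptions paraphrased from the walkthrough rather than copied from it; it assumes the `index` built in the earlier sketches (llama-index 0.10 module layout).

from typing import List
from pydantic import BaseModel, Field
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
from llama_index.core.tools import FunctionTool
from llama_index.agent.openai import OpenAIAgent

class AutoRetrieveModel(BaseModel):
    # This schema tells OpenAI exactly how to call our function.
    query: str = Field(..., description="natural language query string")
    filter_key_list: List[str] = Field(..., description="metadata keys to filter on, e.g. 'title'")
    filter_value_list: List[str] = Field(..., description="metadata values, one per key")

def auto_retrieve_fn(query: str, filter_key_list: List[str], filter_value_list: List[str]) -> str:
    # Build exact-match metadata filters, retrieve only the matching nodes, then answer.
    filters = MetadataFilters(
        filters=[ExactMatchFilter(key=k, value=v) for k, v in zip(filter_key_list, filter_value_list)]
    )
    retriever = VectorIndexRetriever(index=index, filters=filters, similarity_top_k=5)
    query_engine = RetrieverQueryEngine.from_args(retriever)
    return str(query_engine.query(query))

semantic_film_tool = FunctionTool.from_defaults(
    fn=auto_retrieve_fn,
    name="semantic_film_info",
    description="Use this tool to look up non-review-based information about the films.",
    fn_schema=AutoRetrieveModel,
)

agent = OpenAIAgent.from_tools([semantic_film_tool], verbose=True)
print(agent.chat("Who starred in the first Dune film?"))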
When we say who starred in the 2001 film, we get: calling function semantic_film_info with args query "cast of Dune 2001 film", filter key list ["title"], filter value list ["Dune 2001 film"], right? So we're actually going to filter out every other movie in our vector database. We're only going to care about Dune 2001, and not at an implicit level where we're relying on the similarity search to only surface closely related material; in a literal sense, we are only going to query documents related to Dune 2001. And then we get the answer of the people who starred in the movie. Let's go. And then we can say, who are those giant guys from Lord of the Rings that roam around the forest, right? So in this case, again, we're going to use that semantic film info tool. We're going to say "characters from Lord of the Rings: The Two Towers", we're going to filter on the title, The Lord of the Rings: The Two Towers, and then we're going to get a bunch of dope characters, right? One of the characters that we're going to get is Treebeard, and so the giant guys from Lord of the Rings: The Two Towers are likely the Ents, also known as Treebeard and the Ents of Fangorn Forest. So our query is kind of bad, right? This is not a good query. But it doesn't matter, because thanks to the filtering, we're not just relying on our context potentially containing Lord of the Rings or Lord of the Rings-adjacent material; we can guarantee it does by using that metadata filtering. Very cool. So next, we're going to do the same thing, but instead of just filtering on metadata and then relying on vector search, we're actually going to do some SQL queries. So we're going to import all of our movie review tables as CSVs, then load them as data frames, and then push those into our in-memory SQLite database. Again, if this were a production environment, we would not want this to be an in-memory SQLite database; it would just be some database that we have, right? But you can see we can have a ton of different tables. Even in this toy example here, we have a lot of different tables in our database, and we're going to see how well it works given that. So now that we've created a database with tons of tables, and those tables are all based on those review DataFrames, the review DataFrame here basically being this exact CSV file represented as a pandas DataFrame and then converted into a SQLite table, we have the ability to load that SQL database as a kind of index, right? And so what this means is that we're going to give it an engine. The engine in this case is the thing that points to our SQLite database. And then we have our list of movies that it's going to care about. So say we had more tables, right? Say we had a hundredfold more movies, every movie, let's say. Well, we could have come up with a list based on just what the adventure movies are in this case, and only use those specific tables in this particular index. So again, we have the ability, even a level above, to still think about metadata. In this case, of course, we just have the movies that we do, and so we're going to use every table, easy peasy. Now we're going to create a query engine, and it's going to be called the NLSQLTableQueryEngine. It might make sense.
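Before that engine, here is a sketch of the database setup just described: loading the review CSVs into an in-memory SQLite database and wrapping it for llama-index. The file names and table names are placeholders, not the demo's actual ones.

import pandas as pd
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase

# In-memory SQLite: fine for a demo, swap for a real database in production.
engine = create_engine("sqlite:///:memory:")

csv_files = {
    "dune_2021_reviews": "dune_2021_reviews.csv",
    "lotr_fellowship_reviews": "lotr_fellowship_reviews.csv",
    # ...one CSV / table per movie...
}
for table_name, path in csv_files.items():
    df = pd.read_csv(path)
    df.to_sql(table_name, engine, if_exists="replace", index=False)

# SQLDatabase is the llama-index wrapper the NL-to-SQL query engine works against.
sql_database = SQLDatabase(engine, include_tables=list(csv_files.keys()))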
What this does is it takes an NL, or natural language, query, converts it to SQL, runs that SQL on the engine that we provided, and then returns us a result. Very cool. We can wrap that all up in a query engine tool where we describe exactly what's happening. We can indicate what movies we have in this database, or, to be more generic, what tables we have in this database, and then we can say what they contain info about. We can also specify when this tool should be used. We can set this up, and then we can do things like ask, what is the average rating of the second Harry Potter movie, and get a response like: Harry Potter and the Chamber of Secrets' average rating was about 7.54 out of 10. And then we can ask questions like, which movie has better reviews, Lord of the Rings or Dune? And we can get responses that Lord of the Rings has about 9.87 out of 10 and that Dune has 8.34 out of 10, so Lord of the Rings has better reviews compared to the Dune series. So we're kind of on the way to answering that question Greg had, right? Which is the best adventure movie? So what if we wanted to use both? Well, if we want to use both, all we have to do is use the combined pipeline. You'll notice that we still have our OpenAI agent. All we do is add an auto-retrieve tool and a SQL tool, and then we can ask questions like, what movie is about a chamber, and what is the average rating of the movie? So in this case, we're going to see a lot of tool usage. We're going to see the semantic film tool with args. It's going to filter by all of our movies in this case, because it could be any of these movies, and so then it's going to give us the output that Dune sometimes has things about chambers. Okay. It's also going to say, hey, SQL query: take the average rating from the Harry Potter and the Chamber of Secrets table, right? And then we're finally going to ask our semantic film info tool to think about just the Chamber of Secrets film, and we're finally going to get a response that Harry Potter and the Chamber of Secrets is about a chamber, and the average rating of this movie is 7.54 out of 10, or 7.2 out of 10. So the idea is, again, that we have this ability to make multiple calls. We can also see it working with two different semantic query calls. What worlds do the Lord of the Rings and Dune movies take place in? We get the response that the Lord of the Rings movies take place in the world of Middle-earth; on the other hand, the Dune movies feature various worlds, including Caladan, Arrakis, Giedi Prime, and Arrakeen. So, very cool, very cool. The idea is we can use multiple tools to figure this out. Finally, we ask the all-important question: which of the following movie series is considered the best, Harry Potter, Dune, Lord of the Rings, or The Hobbit? Base your answer on both review and non-review information. We can see that we get the final answer of: among the movie series mentioned, Harry Potter is considered one of the best movie series; additionally, Dune is also considered one of the best movie series, okay? So we kind of get a tie, I suppose, in the LLM's interpretation. Sorry to disappoint you, Richard, who said that you will not trust RAG if it picked anything other than Lord of the Rings.
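Pulling those pieces together, here is a sketch of the NL-to-SQL engine and the combined agent, assuming the `sql_database`, `csv_files`, and `semantic_film_tool` objects from the sketches above; tool names and descriptions are illustrative.

from llama_index.core.query_engine import NLSQLTableQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.agent.openai import OpenAIAgent

# Natural language in, SQL against the review tables out.
sql_query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=list(csv_files.keys()),
)

sql_tool = QueryEngineTool(
    query_engine=sql_query_engine,
    metadata=ToolMetadata(
        name="film_review_stats",
        description="Use this to answer quantitative questions about review scores for the films.",
    ),
)

# The combined agent: one tool for semantic (Wikipedia) context, one for SQL review data.
combined_agent = OpenAIAgent.from_tools([semantic_film_tool, sql_tool], verbose=True)
print(combined_agent.chat("What movie is about a chamber, and what is its average rating?"))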
Harry Potter, dude, they're tied for the best movie um in its impression based on the data that it has access to so that's the uh that's the idea that's what we're doing with with all of this uh it it's basically just using the lm right to decide when to use tools to decide which tool is most relevant and use that information to, you know, to make sure that we're using the right tool at the right time. That's that data agent, right? We have many sources of data. We need something that can be smart about picking when it uses what. And, you know, thanks so much for watching the code demo. Now, I'm going to pass it back to Greg. Before I do, Thanks so much for watching the code demo. Now I'm going to pass it back to Greg. Before I do, got to tell you to like, subscribe and ring the bell. YouTube does help. I know it's kind of corny to say that, but we're here every Wednesday talking about cool stuff. So thanks so much. And back to Greg. Yeah. Awesome. I got one thing I just want to chat with you about for a second, just to make sure it's clear to the audience here. So we built up this data agent. And the thing that we did is we said, well, this is the auto retriever functional tool, right? And we said, this thing performs auto retrieval from a vector database, and then applies a set of filters. And you actually did this. On just the semantic query engine. And then you also did it on the semantic and sequel based query engine, right? So that sort of agentic behavior can happen in all sorts of ways, and we demonstrated a few today. Is that right, chris that's correct yeah yeah so there's there's many layers here when you sort of put this reasoning engine at the front you know even within a single semantic query engine we saw well some of the movies have similar monsters some of the movies have similar you know caverns and caves some of the movies you know adventure movies have these similarities in common so you really have to look at your data and decide on the right metadata decide on the way you're actually assigning the prompt to the agent but also actually doing the prompting in your own query, right? Like it all comes together into better answers, you know? Absolutely right. Absolutely right. Okay. All right. All right. Well, let's wrap up and then we'll jump into questions here in just a moment. So everybody that concludes our event for the day. We've got a number of great questions coming in the chat. We saw the core constructs of Lama Index V 0.10 from nodes to embeddings, to indices, to retrievers, to query engines, to data agents. And we saw that data agents really are just that next layer of abstraction above the query engine, allows us to leverage multiple query engines also gives us access to tools one of those tools could be the open AI functions API so that's kind of an interesting thing as well so there's a lot more good stuff when we talk about metadata filtering reasoning during retrieval processes and agentic systems that we look forward to bringing you guys on YouTube live as we continue into 2024, really the year of agents. So if you got questions, please go ahead and throw them in Slido or throw them in YouTube. We will triage by taking Slido questions first. So please upvote those. And let's get Wiz back on the stage. Let's go. We've got a bunch of good stuff coming in. Some of the classics as well, people are asking. 
But the first one that I saw a while ago on Slido is: does the embedding apply to the text chunk in the node and the metadata, or just the text chunk? For the default case, just the text chunk. You can definitely set it up in a way where you combine the metadata, or you also embed metadata. I mean, of course, the sky's the limit all the time, but by default, it's the text inside the node. Okay, okay. Next question. Another anonymous one here. How do the different response modes of the query engine affect the agents afterwards? Any recommendation? Yes. So assuming I understand the question correctly, how it can respond can affect the agents afterwards. Depending on what information it responds with, how much information you're basically sharing in that message thread can absolutely impact the performance of the agents afterwards. When it comes to what's best, it's just going to be use-case based, but I would say sharing any and all information that is relevant to help your agent is the best way to go. All right. Next question. Another technical one here. Do we need the storage context with the ingestion pipeline? The example of the ingestion pipeline on the LlamaIndex website does not use it. Yeah. So basically what we're going to do is we're actually going to create the index first, and that's what we use the storage context for. And then we are going to use our ingestion pipeline to build nodes, which we will then insert into that vector store index we created with the context. So the way that we did this today, you absolutely do need that step first. Basically, that's telling LlamaIndex what it can expect from our Qdrant vector store index. All right, all right. Okay, let's do a couple quick ones here. Is "nodes" just a LlamaIndex term? I mean, basically, yes. It's close to synonymous with things like "document" from classic NLP. In LlamaIndex, obviously nodes mean a little bit more than just that, right? The idea is that they represent some standard first-class object. Greg already kind of went through that. But in the way that they mean node, yes, it is a LlamaIndex-specific term. All right. And then on data agents, it says the data agent seems to rely on the OpenAI function tool capability, at least the one we demonstrated today. Which other LLM has a similar capability that can replace this OpenAI function capability? Anthropic? I mean, yes. So there are actually a lot of LLM providers now that provide that kind of functionality. So the thing that I would say is: check your LLM provider. Make sure that, if you need it or want it, you have access to some version of function calling. Some of them even use the same API, as is obviously the case just because of the way that it works; OpenAI being, quote unquote, first on a lot of these things means that a lot of people are adapting to the way that they do things, and so you can even use the same thing with some other LLM providers. But I would always recommend you check your LLM provider to see what capabilities it has and how it exposes those capabilities to you as a user. All right. All right. Yeah. So speaking of either/or, we're getting a ton of questions about Langchain versus LlamaIndex. Let's go ahead and start attacking some of these.
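As an aside on that storage context answer, here is a sketch of the pattern described: create the vector store index (with a storage context) first, then run an ingestion pipeline and insert the resulting nodes. The collection name and chunk size are placeholders, `documents` is assumed to have been loaded elsewhere, and exact APIs may vary across llama-index and qdrant-client versions.

```python
# Sketch: index with a Qdrant-backed storage context first, then ingest and insert nodes.
import qdrant_client
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(client=client, collection_name="movies")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Create the (empty) index first, tied to the Qdrant-backed storage context
index = VectorStoreIndex.from_documents([], storage_context=storage_context)

# Ingestion pipeline turns raw documents into nodes...
pipeline = IngestionPipeline(transformations=[SentenceSplitter(chunk_size=512)])
nodes = pipeline.run(documents=documents)  # `documents` loaded elsewhere

# ...which we then insert into the index created above
index.insert_nodes(nodes)
```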
Could we have used Langchain instead of LlamaIndex to do this? I mean, we could have. Yeah, for sure. Could we have done it in the way that we did it with LlamaIndex? No. So the way that I would think about this particular implementation with LlamaIndex is that we basically have a bunch of different sources of data, and our OpenAI function calling agent is just choosing which of those sources of data to use. So tools are more akin to sources of data in LlamaIndex. I would say Langchain has a very generic version of a tool, whereas these data agents are meant to pick between different piles of data more cleanly, if that helps. So we could have asked the same questions, but the actual specific details of the infrastructure and the routing of the prompt to different places are going to be slightly different. That's correct, yes. Okay, all right. And presumably, if we're using the same reference information, it's also going to say Harry Potter's the best, maybe. I'm not going to make that claim. There is perhaps some bias introduced by the prompt, or it could be anything, but it probably would. Sorry to disappoint those in chat. So just double-clicking in on this, Richard asked earlier, generally: okay, I've got Langchain, I've got LlamaIndex, I'm doing some agents. How do I choose, if I can sort of ask the same questions? I would say stick with whichever one your organization is already using; that's kind of the cop-out answer here, right? If you are making a decision on which to use, I would say the decision largely comes down to: what is it you're trying to do? If what you're trying to do is organize some kind of querying system across many different, various data sources that have representations in many different ways, so databases, PDFs, blah blah blah, you get the point, then I would stick with LlamaIndex, absolutely. Yeah. If you're looking for a more generic or kind of free-form agent, something less strictly, specifically built to analyze information across many data sources, I might think about Langchain for that. Okay, next head-to-head here, from Andres. LlamaIndex NL-to-SQL versus the Langchain SQL agent. Is it right to compare these? Are these doing the same thing? Is one better than another today? The answer to your question is actually: which model are you using, and how good is that model at that task?
Because under the hood, we're relying on an LLM to do this operation. LlamaIndex and Langchain aren't doing anything other than asking the LLM to do it. And so that question is really about which LLM you are using. I would say something like GPT-3.5 Turbo "or more", quote unquote; look at the leaderboards if you're interested in what "or more" might look like. But the idea is that's kind of the strength, quote unquote, of model I would recommend, or class of models. So your Anthropics, your Coheres, your OpenAIs. But the actual tooling is just asking the model to do it in both of them, so no sincere difference there. Okay, okay. Speaking of asking models to do things, what about open source? We saw, you know, Snowflake this week crushing it with the embedding models. Do we have some open source LLMs that can compete yet with this agentic stuff? No, unfortunately not. When it comes to complex reasoning, you just need a very big model or a very well fine-tuned model. So some shops might have models that work, but generally, if you're going to be using this kind of agentic behavior, it's better to use one of your favorite closed source models. That's not to say it won't work most of the time using something like a Mixtral or something like a Mistral, but it is to say that if you need that reliability, if you need that accuracy in complex reasoning and understanding, we're still thinking about closed source today. All right, all right, all right. Well, that about wraps up the time we have today. It looks like we did get another request for the RAGAS notebook that we owe folks from last week, so we'll go ahead and make sure that we get that out to you guys. Thanks for the nudge, folks. We definitely appreciate it. It gets busy over here at AI Makerspace. For those of you that had other questions in the Slido, I'm not sure I quite fully understood them. Feel free to drop those into the YouTube comments and we'll get to them asynchronously. But thank you so much for joining us live, everybody. And Wiz, thanks for joining us today for the Q&A and the code demo, per usual. All right, now that you've already liked and subbed, please go ahead and join the AI Makerspace community. If you're not there, we just passed over a thousand folks in Discord, and we'd love to have you join the party. There's lots of stuff going on all the time, and we're continuing to iterate more and more. By the way, we're doing an interview in about 30 minutes with one of our AIM community members that's been successful. You might want to come live and check that out. And if you're ready to start learning for free, a little bit deeper into all these concepts, you might start with our LLMs in Production Cohort 1 course that we open sourced not long ago. And if you want to really accelerate and give yourself many trailheads on which to learn about prototyping and production AI, check out our AI Engineering Bootcamp. That's our flagship course today, although we've got a lot more courses in the works coming down the pipe as we stay out on the open source edge in 2024 and beyond. So thank you, everybody, for coming. If you have feedback, please let us know. I know we got a couple of suggestions on future events. Keep those coming, please. And that's it. If you're doing anything this week, don't forget to not only build and ship, but to share.
Tag AI Makerspace; we encourage you to do something this week. Tag me, tag Chris, tag folks in the community. We'd love to amplify you, help build your personal brand, and help you achieve your goals as you move forward into the next phase of your generative AI career. All right, everybody, that's a wrap. Till next time, keep building, shipping, and sharing, and we will do the same. See you on YouTube Live again soon. Bye, guys.
Data Agents with LlamaIndex
3671
AI Makerspace
20240418
Dive into the future of AI with our groundbreaking event on leveraging agents in LLM applications for 2024! Discover how to skillfully integrate agentic reasoning with advanced techniques like RAG and fine-tuning to architect applications that deliver both performance and cost-efficiency. This session offers an in-depth look at the innovative LlamaIndex v0.10 and its revolutionary approach to AI engineering with "LLM-powered knowledge workers." Learn how to construct complex RAG pipelines that masterfully navigate both structured and unstructured data, leveraging semantic pipelines, NL2SQL tooling, and OpenAI’s cutting-edge metadata filtering. Perfect for AI Engineers and Leaders aiming to enhance their projects’ bottom-line value through smart system architecture, this event is a must-attend to stay ahead in the dynamic world of AI. Click to join us and transform your understanding of AI application architecture! https://lu.ma/llamagents Have a question for a speaker? Drop them here: https://app.sli.do/event/5959y4pV8cT79dKzBs42Hr Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/iqwYN9pEUAxUi4Av9
2024-06-13T22:04:21.714727
https://www.youtube.com/live/eh1_CKLi3jw?si=1P8Ha0M9kRvaScMP
Hey everyone, today we talk LangSmith, the end-to-end production LLM application tool leading the way in LLM ops. In this event, you'll learn how to prototype and evaluate a RAG system with Langchain, providing a baseline for improvement. From there, we'll see how Lang chain and Lang Smith can be used to improve LLM applications in production. We'll see lots of features, lots of tools, we'll cover the core ideas. And we're going to go ahead and really get into it soon today. My name is Greg, and I'm founder and CEO of AI Makerspace. Of course, I'm joined by my man, Chris, the LLM wizard and CTO. He'll be up shortly as we've got a couple demos for you today. Thanks for taking the time to join us. If you've got questions along the way, go ahead and get those into Slido and upvote your favorites. We'll have time for Q&A at the end of the event. All right, let's go ahead and jump right in. We've got an hour together, and to learn Langsmith, we're learning basically everything all at once. So let's align our aim for the day. And as we always do, by the end of this session, you will understand how to baseline RAG applications with evaluation and end-to-end testing. You'll see how to improve RAG applications through quantitative metrics and through leveraging human feedback, developer feedback, and annotations within LangSmith. We'll get sort of a broad overview of exactly what the space looks like and exactly what we can do today, sort of at the edge of production LLM applications. We're going to break this up into two key areas. The first one is more of that AI engineering, that prototyping. And as we dig into that, we'll realize that once we have something deployed to production, it's now time to do a little searching and researching, do a little data science to take it to the next level. This is where we believe the data scientist of the future is really going to have a lot of important work to do. Once we get those applications into production, then it's time to optimize them. Then it's time to get data centric and look at the data science. We're going to focus first on AI engineering and what lies at the center of these infrastructure tools like Langchain. What they're doing fundamentally is they're helping us to combine LLM applications with other sources of computation and knowledge. We've seen Langchain sort of emerge over the past six months, since their series A over the past eight months, and really start to dial in exactly what they're all about. We saw just a few months back, they're talking about being data aware, connecting to other sources of data and knowledge, as we've heard, and moving more towards these agentic reasoning applications as well. More recently, we see that they move from the sort of data-aware language to this context-aware language. In the age of RAG, this makes a lot of sense. 
And then simplifying to sort of talking, talk about reasoning, machines, and applications, this is another core tenant of what langchain is all about today interestingly if you joined us last week we walked through this diagram to teach you about lang serve today we've seen some updates just in the last week we've seen some key updates to this diagram including the playground including talking about now this lang chain core in lcel the lang chain expression language and also looking at the community piece which we assume is going to be the open source pieces that remain free and open in lang chain we also see this nice cognitive reasoning cognitive architectures sort of templates and lang chain. We're interested to see how that continues to evolve. So hopefully you'll get a feel for the big picture of this today. And we've got lots of resources to help you dig into the details. So we're gonna move quickly through each of the core component pieces to make sure we can get to Langsmith today. LCEL is the core language that allows us to build and deploy both in LangServe and in LangSmith. It really connects well to the tracing and monitoring functionality that can help us with debugging in terms of LangSmith. That's going to be our key focus for today. And when we talk about models and prompt templates and really engaging in a chat style way with a chat style model, we want to make sure that we understand that we're setting up these templates to leverage that system, user, and assistant message schema that is sort of one-to-one when we go to system human and ai message in langchain we've talked about this a lot before we're going to keep it simple today using openai and langchain and it should be pretty straightforward exactly what we're doing as we set up our models and chat templates of course we're doing a rag system, so we need to make sure that we're building in each of these component pieces along the way. We're not going to really deep dive each of these pieces that we have as we have in the past, but we are going to remind you that RAG is all about finding references, adding those references to the prompt, improving your generations based on the references that you find. So when we ask a question, we go and we search a vector database. We set up our template so that we can take the stuff we find in the vector database that's similar to our query, stuff it into the context of the prompt, thus improving our generations. This is the basics of retrieval, and we will be setting this up as we get into our baseline. Of course, we need to chain things together. That's what laying chain is all about. And this is sort of the layer of abstraction that comes after we get the nuts and bolts and the core components really set up and ready to rock and roll. The chain is kind of the thing. It's just, it's nothing crazy. It's just allowing us to connect pieces to other pieces, models to retrieval systems, et cetera. And so this Lang chain expression language, this really elegant piece down here, this is what we're going to be using. This is what's going to allow us to integrate very nicely into Langsmith. And of course, we're going to be chaining things together along the way. What we're going to build today is we're going to actually go and find some rich content from Langchain's blog. We're going to build a rag flow specifically out of Langchain's blog. We're going to search Langchain for the top resources for any given question. 
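For readers who want to see the pieces Greg just listed in code, here is a minimal sketch of a chat prompt template with system and human messages, an OpenAI chat model, and LCEL piping them together. The prompt wording and inputs are illustrative only.

```python
# Minimal sketch: chat prompt + OpenAI chat model + LCEL pipe.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions using only the provided context."),
    ("human", "Context:\n{context}\n\nQuestion: {question}"),
])
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# LCEL: prompt -> model -> string output parser
chain = prompt | llm | StrOutputParser()
answer = chain.invoke({
    "context": "LangSmith traces every step of an LCEL chain.",
    "question": "What does LangSmith trace?",
})
```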
We're going to be able to ask specific questions, and we're going to be able to get really nice generations and fact-checkable answers with references. We're going to keep it simple today in terms of models, and it's going to become clear why. OpenAI is a great starting point for learning, and since we're learning something awfully complex, let's not add additional complexity today. We've also opted to leverage OpenAI's GPT-3.5 Turbo as our chat model today, and that's for good reason that we'll talk about in just one second. As we get into our RAG system, we want to note we are using the Langchain blog, and we are leveraging Facebook AI Similarity Search, FAISS, to build our vector store out; that's an open source tool. And finally, to get this baseline set up, we need to talk a little bit about evaluation. This evaluation process, this sort of feedback loop, allows us to then not just glue this whole RAG system together, but start to really ask the question: how can we improve this? Should we change our chunking strategy? Should we try different retrieval methods? Should we add some fine-tuning? Should we augment our data in some way? We can start to ask really poignant questions to improve specific aspects of the performance. And as we talk about evaluation, it's important to understand that in a RAG system, we have the question we ask, we have the predicted answer, the result here in this slide, we have the context, the returned references, and we have the ground truth, called the answer in this slide. Generally, that ground truth is best created by humans, but so often these days we're doing the ground truth generations with the most powerful model we can find out there. Shout out to GPT-4, GPT-4 Turbo today. And since we're using GPT-4 as the ground truth generator, we're going to go ahead and leverage GPT-3.5 for our predicted answers, or results. In Langchain, we're going to leverage an evaluation metric called chain of thought QA, and it's very simple in the way that it works. The prompt template is very instructive here: you are an expert professor specialized in grading students' answers to questions. You are grading the following question: query. Here's the real answer: answer, the ground truth. You're grading the following predicted answer: result. This is the prediction. Respond with correct or incorrect. We can use the LLM in this way to evaluate things automatically. Yes, evaluation is always going to come at some sort of cost. And we are going to take this to the next level in terms of the chain of thought QA, where we're going to not just instruct that you are an expert professor, but further instruct it to use chain of thought reasoning. All right. Now we're ready to baseline everything. We're ready to see it rock and roll in LangSmith. We've got the Langchain blog, we've got the RAG system, we're going to do it with FAISS and OpenAI, and all this comes together in our first demo. I'm going to pass this one off to Chris the Wizard to show us how to get this up and running as a baseline so we can take it to the next level soon. Chris, over to you, man. Hey, Greg. Thanks so much. Yes. So I will go ahead and walk you through.
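As a quick aside before the demo, here is a hedged sketch of the "expert professor" chain-of-thought QA grading Greg just described, using LangChain's evaluator loader. Evaluator names and output keys may vary by version, and the strings passed in are placeholders.

```python
# Sketch of chain-of-thought QA grading with GPT-4 as the grader.
from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI

eval_llm = ChatOpenAI(model="gpt-4", temperature=0)  # GPT-4 as the "expert professor"
cot_qa = load_evaluator("cot_qa", llm=eval_llm)

graded = cot_qa.evaluate_strings(
    input="What is a good way to evaluate agents?",                # the query
    prediction="Evaluate the agent's tool use and final answer.",  # GPT-3.5 result
    reference="Agents should be judged on trajectories and final answers.",  # ground truth
)
print(graded)  # e.g. {"reasoning": "...", "value": "CORRECT", "score": 1}
```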
First, I'm going to walk through kind of the notebook that we have set up here, and then I'm going to walk through the actual LangSmith interface. So the idea is that we have two separate entities that are working here. One is our Colab, which is going to represent our application. This is what could be running in production, or anywhere that it could be running. And then we have, of course, our LangSmith backend, kind of, you can think of it, which is where we're going to send traces to, and that's what's going to power our tool. So first, we're going to grab our dependencies. We're going to set up our OpenAI API key. That's the idea here: we're going to use OpenAI for basically everything today, so we're going to set it up there. Next, we have to set up a basic RAG chain. We're going to use GPT-3.5 Turbo, as Greg described earlier, because we need to ensure that we are able to mark these responses, or evaluate them, correctly or responsibly. And so we're going to use GPT-3.5 Turbo as our generator for our application and GPT-4 as our evaluator. You'll notice that we've got this tags thing here. This tags thing is pretty interesting because it is what's going to let us set what resources we're using in LangSmith, so that we can have better line of sight to which particular resource is being used when, and we have a way to reference that for when things change, right? So these tags are a very powerful part of the application. And we'll have the notebook link for you very shortly; I'll drop it in the chat when we do. So the async bug handling, this is just necessary. We have to do this in order to do this in Colab, so this is like a processing step. Then we have a sitemap loader, which we're going to use to load the Langchain blogs. Langchain's blog very happily has a very robust sitemap, and we're going to use that to ensure that we get all of these blogs. So there are 155 total blogs. We're going to load those up, and then we can see that we have these sources to point us back to the original blogs, which is dope. We're going to use a recursive character text splitter. So we're just going to use a very naive splitting strategy here; there's no particular rhyme or reason for it. We know that these blogs are going to be relatively parsable text, so we're just going to use a recursive character text splitter. This is one of the things that we could modify or play with and then determine the results through LangSmith, which is pretty awesome. We wind up with a total of 1100 documents, which is not so bad. We now need to embed all those documents and store them in a vector database. So we're going to go ahead and do that using the OpenAI embeddings tool. We're going to be using text-embedding-ada-002 in order to do this properly. We're going to use a FAISS vector store. So FAISS is, like Greg said, just Meta's; it's a great, quick, lightweight vector store. We're going to go ahead and load up all of our documents into there and embed them, and then we're going to set it up as a retriever. So now we have a retriever. We have a generator. We need to create a prompt. So we're going to create our base RAG prompt here.
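Here is a sketch of those indexing steps: load the blog via its sitemap, split with a recursive character splitter, embed with ada-002, and store in FAISS. The sitemap URL and chunk sizes are assumptions rather than the notebook's exact values, and the nest_asyncio call is a guess at the "async bug handling" step mentioned.

```python
# Sketch of the indexing pipeline: sitemap -> split -> embed -> FAISS retriever.
import nest_asyncio
from langchain_community.document_loaders.sitemap import SitemapLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

nest_asyncio.apply()  # likely the "async bug handling" step needed in Colab

docs = SitemapLoader(web_path="https://blog.langchain.dev/sitemap.xml").load()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
vector_store = FAISS.from_documents(chunks, embeddings)
retriever = vector_store.as_retriever()
```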
The basic idea here is that we just want to say: using the provided context, please answer these questions. If you don't know the answer, say you don't know. Then we're going to pass in context and a question, and then we can finally set up our LCEL chain. So we're going to use LCEL because it's well integrated with the rest of the LangSmith and Langchain ecosystem. It has a number of production benefits, but it also has a number of benefits when used with LangSmith, and so that's what we're going to do today. I've added some annotations to our base RAG chain here in the notebook so you can look through it, but the idea is that we're going to retrieve our question from our input. So if you'll notice, we're calling this with this dictionary with a question key. So we're going to grab our question, and that's going to become our question for the next step. And then we're going to grab some context, which we're going to get by passing our question into our base retriever. So you can see that we now have this dictionary object, which is going to have keys context and question. Context is going to be filled with context from our base retriever based on our question, and then question is just going to be that question pulled forward. We're going to do this next little piece of the puzzle just so we can pass that context forward and retrieve the context through the output, because we want to be able to look at our retrieved context from our application. Now, in our last step, straightforwardly, we pass that context and question into our base RAG prompt, which is going to format the prompt based on the context and question. Then we're going to pass that to our LLM, then into our string output parser, and then we're going to retrieve our context going forward. And that's it. So let's test this out. We can ask a question: what is a good way to evaluate agents? We get a response based on the blog, and then we also get a bunch of context, which is going to be a list of documents. The idea here is that we're going to have both a response and some context, which is what we need in order to ensure that we're well grounded, right? So we're grounding our responses with that context, which is why we want to retrieve the context. So now that we have that set up, it's time to finally get into LangSmith, right? We've got our chain. Our chain's working great. We're loving our chain. And then we're going to use our LangSmith environment variables to start this process. This is all you need to do. If all you want is simple monitoring, you could stop right here, which is the power of LangSmith. So we're just going to set LANGCHAIN_TRACING_V2 to true, we're going to name our project, and we're going to point to the LangSmith endpoint. And then all we have left is to actually get our Langchain API key, which we're going to get from the user. You can put this into an environment variable, set it in your secrets, whatever service you're using. That's all we have to do. With that, we can run this particular chain, and we can notice that in this chain, we've given it the tag demo run, right? So let's go into LangSmith and see what this looks like. So you'll notice that in LangSmith, we have a bunch of different, I'll zoom in here so you can see it a little bit better, we have a bunch of different runs, right?
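For reference, here is a hedged sketch of an LCEL RAG chain that returns both the response and the retrieved context, plus the environment variables that switch on LangSmith tracing. The project name and prompt wording are illustrative, and `retriever` is assumed to come from the indexing sketch above.

```python
# Sketch: LCEL RAG chain returning {"response", "context"} with LangSmith tracing enabled.
import os
from operator import itemgetter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Using the provided context, answer the question. If you don't know, say so.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo", tags=["base_llm"])

rag_chain = RunnableParallel(
    context=itemgetter("question") | retriever,   # fetch documents for the question
    question=itemgetter("question"),              # pull the question forward
) | {
    "response": prompt | llm | StrOutputParser(),
    "context": itemgetter("context"),             # surface the retrieved context too
}

# LangSmith tracing: set these and every run gets traced
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "langsmith-rag-demo"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "..."  # your LangSmith API key

result = rag_chain.invoke(
    {"question": "What is a good way to evaluate agents?"},
    config={"tags": ["demo_run"]},
)
```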
We have a bunch of different expansions that we can do. We can click on them, but we want to look at that specific demo run that we tagged. And the way that we do that is we can go to our tags and check demo run, and then we can view the actual run that we just did in the tracing tool. So you'll notice that this tracing tool is, I mean, it's pretty powerful, honestly. It is an incredibly robust trace. We can expand this all the way out and you see all the steps involved in using Langchain. You can check to see which steps took the most time or the least time by clicking on this run stats button. We can see how many tokens we used. When we hover it, we can see how many prompt versus completion tokens we used. We can see the total time for this chain to run. This is the power of LangSmith, right? We're getting all of this kind of for free, just from setting up those environment variables and then running our actual command. That's it, right? So we have access to this whole trace. We can see every step and what every step did. We can see our retriever steps, our prompt steps, our parser steps. Everything is available to us, which is absolutely incredible. So now we're able to see some of the benefits of LangSmith already. You'll also notice that we have the ability to see our latencies. So this is giving us an idea of how long it's taking for our users to run their chains. We can also see every call that's made, or we can look at just the LLM calls that are made if we wanted to. We can see a monitor of the different call counts that we have. We're just doing this over one day, so we're not seeing a huge difference, but if we look over one hour, we can see we have these LLM calls happening, our success rate, so we can monitor health. We have our average latencies, P50, P95. We have so much information, and all we did to get this was set up those environment variables, okay? But LangSmith can do so much more, so we're going to look at how much more in a second. But for right now, we're just going to look at how we would run a simple evaluation, right? Greg said we've got to baseline this, so let's baseline it. The first thing we're going to do is create a mock dataset. Now, you can create this dataset however you want. You can create it with a human expert, with AI experts. It's totally up to you; it's entirely in your hands what you're going to be able to use this for. So in this case, we're just going to go ahead and produce a number of questions manually. Then we are going to set up our Langchain client, set up our dataset name, create the dataset in LangSmith, and then add examples. So we create an empty dataset, and then we individually add examples based on these questions. What we're providing for the answer in this case is actually just the context that our retriever pipeline retrieved. So there are a number of different ways you could provide an answer. You could have AI answer the question, you could have GPT-4 do it, you could have anything you want. In this case, for the chain of thought example, we're going to go ahead and provide these contexts, and those contexts are going to be what lets us do our evaluation: is this a correct response or not?
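Here is a sketch of creating that evaluation dataset through the LangSmith SDK, mirroring the steps just described. The dataset name, questions, and reference answers are placeholders.

```python
# Sketch: create a dataset and add examples via the LangSmith client.
from langsmith import Client

client = Client()  # reads LANGCHAIN_API_KEY from the environment

dataset_name = "langchain-blog-qa"
dataset = client.create_dataset(
    dataset_name=dataset_name,
    description="Questions about the LangChain blog with reference answers/contexts",
)

examples = [
    ("What is a good way to evaluate agents?", "reference answer or retrieved context"),
    ("What does LangSmith help you monitor?", "reference answer or retrieved context"),
]
for question, answer in examples:
    client.create_example(
        inputs={"question": question},
        outputs={"answer": answer},
        dataset_id=dataset.id,
    )
```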
So let's look at what that data set looks like right here. You can see that we have our data set. We can see our examples. We can click on each of our examples where we have our inputs and our outputs. And we can see all the linked runs associated with those examples. You can also see that we have the ability to run tests. And we'll see tests right in this window. And we'll have access to exactly how these tests shook out in the code or in the evaluation. This is a very powerful tool. Extremely good at what it does. And what we're going to do is we're going to look at the first baseline that we got, which is going to be this contextual accuracy of 0.83, helpfulness of 0.67, and a litness of 0.83. We're going to come back to that metrics in a second. But for right now, this is the idea. We're doing pretty good, but we could be doing better. So we'll kick it back to Greg, who's going to introduce to us how we might do it better. And then we'll see how we do it in the code. Yes, Chris, there's so much in Langsmith for us to try to get a handle on that. It's really kind of crazy. And, you know, trying to really wrap all this together in one it really truly is an end-to-end production platform as you guys can see you know just getting rag up and running and over to langsmith and the ability to trace it just kind of gets us started and this is really the point of productionG applications and production LLM applications is that once we get them into an environment where we can actually do some improvement, it's time to really do some data science. It's time to do some LLM ops. It's time to really try to fine tune this thing in many different ways to really dial it in for our application. You know, we've seen that this new age of the AI engineer is kind of upon us, where it doesn't take long. It doesn't take five months. It doesn't take five weeks. It doesn't even take five days to prototype anymore. It takes just a few hours to get a prototype up and running and into production. But then what happens once you get there? Well, that's when things start to get really interesting. And that's really where the edge of all this stuff is today. For instance, once it's there, what are we evaluating and why? We just saw Chris outline correctness and harmfulness and custom metrics like litness. We can literally, sky's the limit with eval for us. And that's not even talking about really focusing in on super specific retrieval metrics and trying to enhance our baseline retrieval and therefore get better generations. Of course, we can do fine tuning. If we have specialized language, we might want to fine tune embedding models. We might want to continue to fine tune down to the specific task our users are using our application for with the chat model that we pick up off the shelf. So it's really interesting to sort of talk about going from basic to advanced RAG. There's lots of ways to do this. Of course, the black art of chunking is always there. There's various ways to look at different indices that you can create and have different metadata filters and reasoning machines, sort of agentic thinking on top of all of those. And really, there's so many interesting things that we can do. We're going to keep it super simple today. In the interest of time and introduction, we're just going to do a simple re-ranking on our retrieval system, and we're going to see how the metrics can actually go up. 
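Coming back to the baseline evaluation Chris ran against that dataset, here is one way such a run could look using the langchain.smith helpers that were current around the time of this recording (newer LangSmith releases expose an `evaluate` function in the langsmith package instead). The `client`, `dataset_name`, and `rag_chain` names come from the sketches above, and the project name is illustrative.

```python
# Sketch: run a chain-of-thought QA evaluation over the dataset (legacy langchain.smith API).
from langchain.smith import RunEvalConfig, run_on_dataset
from langchain_openai import ChatOpenAI

eval_config = RunEvalConfig(
    evaluators=["cot_qa"],                       # chain-of-thought QA correctness
    eval_llm=ChatOpenAI(model="gpt-4", temperature=0),
    prediction_key="response",                   # our chain returns {"response", "context"}
)

run_on_dataset(
    client=client,
    dataset_name=dataset_name,
    llm_or_chain_factory=lambda: rag_chain,      # factory so each example gets a fresh run
    evaluation=eval_config,
    project_name="baseline-rag-eval",
)
```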
So if you are using Langchain, picking up an ensemble retriever is something you might want to go ahead and just try off the bat. This is the idea of really trying to take a bunch of references and then order rank them appropriately. So you can think of sort of a hundred references. How do I just take the top five references? What does it mean to be the best reference exactly? Well, it's something like not a ton of redundant information, it's something like directly correlates to the ground truth answer. And there's many different ways to think about evaluation today, but're going to go ahead and pick up the hybrid retrieval ensemble approach to just show you one quick example of ways to enhance rag that's going to allow us to take a closer look also at and open up some time for us to take a closer look at annotation and feedback. You kind of got some of that from Chris's demo already, but here's the big idea, right? Is in order to align these things with not just humans, but specifically our users, our customers, people who are using our product, that's who we really want to align these things with. And feedback can be as simple as like thumbs up, thumbs down, RLHF status, but it can also be much more complex. You imagine that the data scientist of the future almost has to be a UX person as well, almost has to really be engaging the user for those additional pieces of feedback, especially early in the product development lifecycle, where you're really getting this dialed in with early adopters and visionaries. You as the developer are going to do this on your own in Langsmith at first. And this is where some of these feedback tools and annotation tools come in. Because although you can sort of get that thumbs up, thumbs down, oftentimes you want to just sort of make notes that you can come back to later and you can think about as you continue to curate data sets for the future. So for instance, if you're seeing something that isn't quite right, doesn't quite align, you may want to go back and you may want to modify that data set, modify that vector database full of your information in some way to really enhance the retrieval process. And yes, probably improve some metrics, but qualitatively make the output better for your users. Because remember, as we build LM applications, we want to prompt engineer as far as we can. We want to get it into deployment as quick as possible. Often we're moving from prompt engineering to question answering, AKA RAG to fine tuning. moving from prompt engineering to question answering, AKA RAG, to fine tuning. And as we're sort of providing annotations and feedback on the way people are interacting with our application, we can start to imagine ways to improve our prompting. We can improve our instructions. Whether it's zero shot, we can give additional examples of specific things that might show up in the future. One shot, two shot, few shot. And of course, as we get more and more and more and more and more examples that we're collecting along the way, we might realize that actually the way people are using this isn't in sort of a generic LLM way. It's solving one super specific task that we didn't really see from the outset. And so as we move along this sort of dimension of more and more examples, we ultimately get back to fine tuning. And so we can control for uncertainty and optimize that user experience through prompt engineering. We can use metrics like harmfulness and helpfulness directly built in to Lang Smith now. 
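Here is a minimal sketch of the hybrid ensemble retrieval idea mentioned above: blend a keyword (BM25) retriever with the dense FAISS retriever and fuse their rankings. The weights and k values are arbitrary, `chunks` and `vector_store` come from the earlier indexing sketch, and BM25Retriever needs the rank_bm25 package installed.

```python
# Sketch: ensemble (hybrid) retrieval over BM25 + dense FAISS retrievers.
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever

bm25_retriever = BM25Retriever.from_documents(chunks)  # `chunks` from the splitting step
bm25_retriever.k = 5

dense_retriever = vector_store.as_retriever(search_kwargs={"k": 5})

hybrid_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, dense_retriever],
    weights=[0.5, 0.5],  # equal weighting; tune per use case
)
docs = hybrid_retriever.get_relevant_documents("How do I trace an LCEL chain?")
```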
And of course, we can use that reinforcement learning type of approach directly as developers or working with our users to make those guardrails even better. As we think about the model behavior, we want it to be more and more constrained over time more and more dialed in over time more and more useful for that specific thing we're doing with our users and we want it to be grounded in the right data this annotation and feedback is going to allow us to do both it's going to allow us to optimize in both dimensions both towards optimizing the llm and the way the model acts, as well as towards optimizing the context and the way we're grounding the model with our retrieved information. Because it is a meandering road to optimize these specific types of applications, right? We'll prompt, we'll use one shot, two shot, few shot. We might do some rag, we might do some fine tuning. From there, what do we do? Maybe we pick up an ensemble retriever. Maybe we start annotating and noticing real specific patterns. How can we get to that human level performance so that we can feel comfortable deploying this at scale? We're going to have to potentially work with our users. The tools in Lang Smith allow us to streamline this process more than ever. And of course, ultimately, if you're seeing the same thing over and over, maybe you want to continue to annotate, provide a little feedback on that. Hey, this is the same or similar prompt semantically to something I've seen in the past. Start collecting those, start caching those prompts, start saving money as this thing rolls out to more and more users, more and more inference, more and more cache if you don't cache. And then as we get to fine tuning, we're talking about cheaper models, running inference on more quick, easy to deal with models. And that's just going to be better for business overall. So it's all about improving the system once we're in production. And to sort of show exactly how this works, we're going to go back to Chris for this improving RAG demo, where he's going to do a deeper dive on some of the things we've already talked about today and show exactly how you can take your RAG system to the next level chris back to you man heck yeah thank you greg okay so we're gonna touch on a couple things here first of all we're gonna touch on re-ranking so what is re-ranking how do we do it why do we care well re-ranking is basically a way to cast a wide net and then, you know, you know, pare it down. So the idea is that with reranking, we're going to be able to, you know, take a large set of potentially relevant documents and then trim it down to the most likely relevant documents. So we're going to use Cohheres re-ranking to do this. We're also going to test this kind of claim made by Cohere, improve surface performance with a single line of code. We're going to see if this is really true. And so we're going to first set up our base retriever. We're going to use the same retriever we did before. So this is our face vector store. The only thing we're going to change is that we're going to increase to 10 relevant documents. So instead of having the base three or two, we're just going to use 10, which is pretty big. And then we're going to use this re-ranking tool to re-rank those results into the most relevant results. So that's the idea here, right? We're going to spend more resources, computationally speaking, in order to do this, right? So in order to improve our results. 
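Here is a hedged sketch of the Cohere re-rank setup Chris walks through next: cast a wide net with k=10 from FAISS, then compress down to the top few with Cohere's re-ranker. It requires a Cohere API key, `vector_store` is assumed from the earlier sketch, and import paths vary slightly between langchain versions.

```python
# Sketch: wide dense retrieval compressed by Cohere's re-ranker.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

wide_retriever = vector_store.as_retriever(search_kwargs={"k": 10})

rerank_retriever = ContextualCompressionRetriever(
    base_compressor=CohereRerank(top_n=3),   # keep the 3 most relevant of the 10
    base_retriever=wide_retriever,
)
docs = rerank_retriever.get_relevant_documents("What is a good way to evaluate agents?")
```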
Now, we're going to recreate our chain using exactly the same chain we did before. The only thing we've changed is that we're using the re-rank retriever. I removed the annotations, so if it's easier for you to understand or interpret, that should be helpful. And the idea is that we're going to include a tag on this chain called cohere-rerank so we can keep it straight in our heads, in the code, and in LangSmith. And now we're going to do this kind of more robust evaluation set. So you'll notice we're still doing this chain of thought QA to check the veracity of our answers, we're still doing this harmfulness detection, but we've also added a few different labeled criteria. The first one is helpfulness, which is going to determine helpfulness taking into account the correct reference answer. Then we're going to see if it is lit. Is this submission lit, dope, or cool? This is obviously a playful example, but the idea is that you can create these custom metrics, and you can create them in an even more robust fashion if you wanted to. The idea is you can create your own custom metrics that help you understand things about the responses in a way that might be more aligned with how a human might think about whether a response is good or bad. Again, this is a playful kind of thing, but it is important. You'll notice that in all of these, I have this prediction key, "response", right? We see this in literally every one of these. This is because my chain will return two distinct objects: both a response and a context. So when we want to have our criteria, we want to compare that to whatever base answers we have, and we need to key it to a particular response. That's what we're going to use "response" for. We can get very fancy with this and use it to compare our contexts and all kinds of other things with custom evaluation metrics, but for now, the idea is we have to address our actual response, and the way to do that is fairly straightforward, which is dope. We're also going to have our three additional metrics. These are out of the box from Langchain: conciseness, coherence, and relevance. These are going to help us understand how concise, coherent, and relevant our responses are to the particular user's query. And now we can run the eval. So first of all, we have our kind of base chain, in which we have a decent accuracy, no harmfulness, which is great, some okay helpfulness, some middling conciseness and litness, and all this other stuff. Okay, that's nice. That's great. And then how does the re-ranker perform on the same thing, right? So we have the same set of results, and we find that, for the most part, they're well aligned. They're within the same kind of range. We get a little bit better on helpfulness for our re-ranker. And we can look at this in our datasets as well, and we can look at the actual responses. We can see here that our contextual accuracy is 0.83, our coherence is very high, we have no harmfulness, we have some helpfulness, and litness is okay. We can actually compare this to runs that we did with the other chain. So we can do a side-by-side comparison of them in order to determine the quality or reasonability of the response.
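Here is a rough sketch of that richer evaluation config: built-in criteria plus a custom "litness" criterion, all keyed to the chain's "response" output. Field names follow the legacy langchain.smith API in use around the time of this recording and may differ in newer releases.

```python
# Sketch: evaluation config with built-in and custom criteria, keyed to "response".
from langchain.smith import RunEvalConfig
from langchain_openai import ChatOpenAI

eval_config = RunEvalConfig(
    evaluators=[
        "cot_qa",
        RunEvalConfig.Criteria("harmfulness"),
        RunEvalConfig.LabeledCriteria("helpfulness"),   # uses the reference answer
        RunEvalConfig.Criteria({"litness": "Is this submission lit, dope, or cool?"}),
        RunEvalConfig.Criteria("conciseness"),
        RunEvalConfig.Criteria("coherence"),
        RunEvalConfig.Criteria("relevance"),
    ],
    eval_llm=ChatOpenAI(model="gpt-4", temperature=0),
    prediction_key="response",
)
```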
So we see here for the most part that these are well aligned, but our chain in terms of the re-ranker is actually quite a bit more lit. And you'll notice that it's uh a much more lit chain you know with very little uh improvements made to the chain thanks to coherence re-ranker so their claim makes search better uh you know with a single line of code if you're talking about litness i mean surely we've shown that right here but the idea is that we can look at these results and we can think about them in terms of, you know, what, what are we failing on? Now, one of the things that you might notice is that the latency is a little bit higher on our, uh, our actual chain with the, uh, re-ranker. And so we can look at our responses, right? We can kind of use our responses to determine how or why that happened. And we can look at the whole chain as a whole to see what happened here, right? Or where did this fall down? How did this, you know, how did this work out to be so, so much more high latency than some of the other responses, right? So if we look at this, like, 24 total thing here, we can see that we have, you know, this, this particular evaluation took quite a bit longer. Okay, that's dope. If we look at the, you know, these lower kind of instances, we can see that, you know, this was incredibly quick. And we have the ability to debug and trace through everything that's in our LankSmith application very straightforwardly, right? And this is what's going to be what lets us take this kind of to the next level. So say we wanted to address this particular run, right? So we look at the results. We say, okay, it's coherent. You know, it's not harmful. It took 6.5 seconds though. Maybe that's a bit long. Oh, this one took 10.74 seconds. Let's look at this one, right? Let's open this run. And we can see when we click on the show run stats, we can kind of see which points took the most amount of time. And we see that the generation here, right, took 7.24 seconds, which is quite long. So it's not our re-ranker. Our re-ranker is only taking, you know, 1.23 seconds. It's not so bad. It's definitely a little bit slower than it would be without the re-ranker, but it's not so bad. We can see that actually this generation took a long time. And now we can kind of look and see why. We see, oh, this is a very verbose kind of response. Okay. We can also see, you know, this is using our base LLM. So this is the tag we set up in our notebook earlier, if we look towards the top. So this is using our base LLM. Took a long time for the base LLM to get that back. So maybe we could look at, you know, some of the metadata that we have access to. That's great. Okay. What if we wanted, though, a better way to, quote, unquote, you know, deal with this? What if we wanted to be able to, you know, take this run and then learn something from it and improve it, potentially, right? And the way that we can do that is we can actually use our annotation queues to do that. Now, our annotation queues are going to be something we can use to metricize or understand our responses outside of, I'm just going to zoom out a little bit. The text might be a little bit less readable, I'm very sorry. But just to make this fit the screen better. But you can see here, like, we can use this annotate queue to determine a lot about our responses from, like, a human perspective, right? So not only can we look at the automated evaluations, but we can also use this annotation queue to have human annotators help us understand our data, right? 
So, labelers, data scientists, whoever is going to wind up doing this job, we have the ability to do a whole bunch of things. First of all, we can rate if this is a correct response, right? So, we can look at our actual response, which is our output. We can read it through, look at our query, and we can say, yes, that is correct, actually, right? And then we can say, oh, you know what? We're going to add some feedback. So we're going to add litness. We're going to say this response is about a 0.5 out of 1 litness score. And then we can add a note, right? Let's say we were looking through this and we wanted to look at the run. We can see like, dang, that took 10 seconds, right? And the LLM took how much of it? The LLM took 9.96 seconds on its own. So we could add a note saying LLM generation took, you know, about 10 seconds might be worth looking at. And then we can add all this feedback, and then we can mark this as done, and we can move on to the next step. So how do we add things to an annotation queue, right? Like, that seems very powerful. Well, the way we do it is we can create a new queue. Let's just call this one correctness, you know, check to verify result is correct. And then we can create that. And now we have the correctness annotation queue. If we go back to our project, what we can do now is we can select runs. And then we can add them to an annotation queue. And it's as easy as that, right? Now we go to our annotation queue, we can see we have this check to verify if the result is correct. And this is all available through Langsmith, right? We didn't have to do anything to get this functionality. It's just provided for us, which is an incredibly powerful tool. We're going to look at one more thing before I kick you guys back to Greg, and that's going to be the playground, right? So let's look at that. Let's go back to our kind of, you know, our beefy 10-second example, right? We're going to open this run, okay? We're going to go to our LLM, which we know took the most time because we were just looking at it. Now that we have our LLM open, I'm just going to zoom out so you can see the button a little bit clearer. We have this playground button, right? So what we can do is we can click the playground button, and this is going to get us a playground that is interacting with the open AI model that we have selected, right? So the idea is that we have the ability to now troubleshoot potentially why this took so long. You know, we can say things like this, please keep your responses short and concise. keep your responses short and concise. And then we can see if this is going to be, you know, thanks Grammarly. Grammarly is telling me to make an adjustment here so we can see this. Okay, this looks good. We can add additional messages like system messages, user messages, or more. Let's just see how this works. We would need to provide our OpenAI API key, which I will do off screen so you guys don't wind up seeing it in the video or the recording. But the idea is here that we're going to, and then we're going to hit start. We're now going to send that to the LLM and we're going to see we get a much shorter response, right? 2.47 seconds versus that last seven seconds. And so this is how you can leverage the tool of Langchain to really optimize and improve your LLM based applications. There's a number of different tools that we can use that help us give us visibility to what we're doing. And on top of it, give us access to troubleshooting and evaluating. 
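The annotation-queue flow above is all UI, but similar feedback can be attached programmatically with the SDK. A hedged sketch follows: the project name, feedback key, score, and comment are placeholders, and the list_runs filters vary by SDK version.

```python
# Sketch: attach feedback to recent runs via the LangSmith client.
from langsmith import Client

client = Client()

runs = client.list_runs(project_name="langsmith-rag-demo", limit=5)
for run in runs:
    client.create_feedback(
        run.id,
        key="litness",
        score=0.5,
        comment="LLM generation took ~10s; worth investigating latency.",
    )
```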
We can also see that even though this is a new run, that run was not stored in our LangSmith project. It was created in its own playground project, which is super helpful for keeping these two things separate. And with that, I'm going to go ahead and kick it back to Greg. Yeah, Chris, man, there's so much that we can do in LangSmith. It's great to see how we can dial in to reduce latency, we can specify an evaluation metric we want to really go after and hit it as hard as we can, and it's almost too much that it can actually do. So super awesome to see all the ways that we can improve just a simple RAG system. And it's really starting to become clear, hopefully to some of the audience today, how improving RAG systems is really going to be part of this data science of the future as we go into 2024. What we saw today is that LangSmith is sort of leading the way in solving these end-to-end LLM-application-in-production issues that so many people are talking about, where there doesn't seem to be one tool to bring everything together. This streamlining of improvements spans the entire spectrum of prototyping, because once we're in production, we might want to improve prompt engineering, we might want to improve specific aspects of RAG, whether it be retrieval or generation, and of course we may want to eventually move towards fine-tuning our LLM or embedding model. But that baseline is so key. If the baseline is 10 seconds, or if the baseline is 0.5 correctness, we want to make sure that we can move those numbers up or down depending on our targets. So interesting to see how LLMs act as evaluators through simple prompting, whether we're prompting them for correctness or a made-up metric, like how dope or how lit something is. This baseline evaluation is going to be highly dependent on your scenario and what you value in your company or for your application. So we're really looking forward to seeing what you guys build, ship, and share with LangSmith. It is still in beta, so as it gets rolled out, more and more people will start using it. Today, you can sign up to get access to it just like Chris has and start using it to take your LLM applications to the next level. And with that, we're going to go ahead and go to Q&A. I'd love it if you guys would let us know what questions you have. It looks like we're light on questions today, and that's probably because we covered so much, Chris, in such a short period of time. Bit of a tornado, bit of a whirlwind. But if you do have questions, we're here to answer them for the next 10 minutes or so. It looks like we've got one question in the chat so far, so go ahead and throw it into Slido, or even throw it into the YouTube live chat today if you want to talk more about your specific issue. But Chris, what is your experience, what's our experience today at Makerspace, with other languages? Like, for instance, Spanish. We have a Spanish speaker asking about this question. Yeah, so I don't have any specific line of sight to Spanish. I have some with written Chinese, with the 01.AI Yi models, which are fully bilingual. When it comes to interfacing with the tool, it is English, and it appears to be English only.
I could be mistaken, and I'm very sorry if I missed some localization options, but right now it appears to be built in English. But it's going to be able to do the same things that it does now with the corresponding LLMs. So if you have a multilingual LLM, it's not like LangSmith won't work with it, but the interface is going to be, for the most part, in English. And then the actual quality of bilingual or multilingual models right now leaves a lot to be desired, especially when it comes to this kind of meta stack that we're developing on top of our LLMs, which is like LLM-as-a-judge, right? LLM as X. That doesn't necessarily translate as well to other languages, and so you wind up in a situation where you're translating a lot or you just don't have the functionality. So I would say that's kind of an ecosystem issue; it doesn't really have anything to do with LangSmith or Langchain, but it's definitely an ecosystem issue. Yeah, yeah. And I am noticing a lot more Chinese and Chinese-English models and capabilities coming out. It seems like that's a really big area of development today that is important to keep an eye on. I haven't seen Spanish, or any of the languages spoken in India, really emerge yet in the same way, but it's definitely going to be interesting to keep an eye on that. So Cameron asks from the YouTube chat here, any advice on building a benchmark dataset? Yep. Pay humans to do it. If you want the best one, a golden dataset, I think, is the accepted term that we're using these days. A golden dataset is worth its relative weight in gold. Getting experts to annotate and develop really good datasets for evaluation and benchmarking is going to be your best bet. Second best bet is things like, if the domain is compatible, LLM-created or synthetically created datasets, which are pretty good. They're definitely much cheaper and much easier to produce, and so maybe that volume might counteract the expert cost. But if you want the best dataset, humans are still the go-to. Synthetic data, though, is coming up fast. And I mean, there are papers that kind of read in this way that suggest it is as good, but the benchmarks are all over the place. Yeah, yeah. Humans still the GOAT. Okay, yeah, love it, love it. That's what we like to hear here at AI Makerspace, yeah. So I want to go ahead and go to Vinesh Iyer, who asks, how does this compare to the likes of W&B and TruEra's TruLens? Are they in the same space, trying to solve the same problem, or is there something fundamentally different here? I mean, the only fundamental difference is the deep integration with Langchain for LangSmith. Otherwise, they are trying to solve similar problems, right? This idea of visibility and monitoring, now moving towards the observability space, moving towards this idea of actionable intelligence, or intelligence that you can automatically action. So it is the case that they're solving similar problems in similar ways. They do have their differences. If you're in the Langchain ecosystem, you have access to LangSmith, and I would recommend using the tool. In terms of the rest of it, it's just going to be a preference thing and which features line up best for you. So LangSmith's evaluation is pretty clean and simple to use. With W&B or Phoenix by Arize kind of tooling, the evaluation is less out of the box, less canned.
And so that might be a benefit for you. But you can integrate RAGAS with any of them, or whatever Stanford renamed RAGAS to when they did their recent paper; you can use that as well. But the idea is you're solving the same problem in pretty similar ways. Yeah, yeah. And there's a lot of monitoring platforms coming out all the time, right? I mean, it's like everybody's got sort of some new, flashier, similar way. I mean, fundamentally the same problem, though. Yeah. So we got a question. I like this question a lot. Is it possible to evaluate, say, correctness or groundedness before issuing an answer to avoid unwanted hallucinations? When you ask the question "possible," when it comes to LLMs, the answer is always probably, or yes, in this case. I mean, you know, one of the metrics we looked at in LangSmith was latency, right? So the idea is that these requests take some amount of time, and the evaluation is going to take more time than the initial request, at least as much time as the initial request, likely more. And so the idea is, yes, you can do that. And perhaps you even should do that, but there is going to be a latency hit when you do that. There are platforms or tools like Guardrails or like Guidance that can help sit in that loop better. There are also API services that can do this. But yes, you could do like a harmfulness pass. What we'd really want to do in terms of correctness, groundedness, is perhaps build a classifier that was lightweight and robust that would sit in that loop for us in order to have the least amount of latency. There's also kind of an interesting push-pull where we want to stream responses, but if we're streaming a response, we don't have the full response to necessarily check for the complete harmfulness or correctness. There's a lot of moving parts here, but yes, it is possible, perhaps not completely feasible right now through these kinds of tools, but there are others that are looking to accomplish that exact task. Yeah. And besides, what do you even tell the user as you're doing this, or even as you say, oh no, I don't want to output that? You just have the "I am thinking" beeping thing up there, you know, something like that. I need a little time to think. That would be hilarious. So Reuben asks, is this replacing the RAGAS library? What's the deal? Yeah, kind of, though. So RAGAS is a very good tool, but it's kind of like a collection of good ideas. Those ideas are not unique or new. And so LangSmith is going to help you use RAGAS-like evaluation. It does not, though, completely replace RAGAS if you mean specifically RAGAS metrics, the way that RAGAS has implemented them, right? It's going to be able to do the same things and achieve the same goals using their own implementation, but it is not literally going to call the RAGAS library tools. And so if you have an affinity for those, or they work a particular way that you enjoy, this is not gonna replace RAGAS. It also works well with tools like RAGAS, right? We can define our own evaluation metrics, our own custom evaluation metrics. And we could just, you know, tuck away a little RAGAS assessment in there in order to get those scores as well. So there's a number of ways that we can integrate and use them together. But for the most part, I would think of this as a replacement generally for RAGAS, plus a bunch of other stuff, even if it's not specifically replacing every single API call. Yes. Yeah.
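As a rough sketch of what a custom LLM-as-judge criterion can look like in code (the criterion text and model choice are illustrative, and import paths vary a bit between LangChain versions):

```python
# A custom "dopeness"-style criterion evaluated by an LLM judge. The criterion
# description is made up for illustration; swap in whatever you actually value.
from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI

judge_llm = ChatOpenAI(model="gpt-4", temperature=0)

dopeness_evaluator = load_evaluator(
    "criteria",
    criteria={"dopeness": "Is the response engaging, punchy, and fun to read?"},
    llm=judge_llm,
)

result = dopeness_evaluator.evaluate_strings(
    prediction="RAG is rad: retrieve the facts first, then let the LLM riff on them.",
    input="Explain RAG in one sentence.",
)
print(result)  # roughly {"reasoning": "...", "value": "Y", "score": 1}
```

The same evaluator can then be reused as a custom metric when running over a whole evaluation dataset, which is the pattern shown in the session.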
And like when I was looking at the eval metrics they're not exactly the way they're calculated and things they're not the same you know even if they're trying to measure the same thing so it's it's sort of like you know put as much eval on there as makes sense to try to understand the problem and uh but yeah as a baseline just measure something chris i love this question how much did this presentation cost i think about like uh 85 cents to a dollar okay okay that's pretty solid yeah and that and that to be clear that's with uh you know repeated reruns and that's with kind of running through the notebook a few times to make sure that it would work for you guys well, but it was still coming under a dollar. So pretty, pretty reasonable. Yeah, with that sweet, sweet eval. Yeah, with the eval. And I ran the eval probably about six or seven different times. Yeah, yeah. Okay, so last question that we have time for today. Can we set up thresholds and alerts, like for latency, for instance, to trigger Ops team actions? Okay. So the answer is you can build a system that will do this. As of right now, I do not believe that Langchain supports this directly out of the box, but you can through the Langs client API, you can build a system that would do this. So that would look for what's my P50 latency in the last three hours, and if it exceeds some threshold, run a job, right? So you can absolutely build a system that does this because they expose all these metrics through the API. I don't think, or sorry, through the client SDK, I don't think that there's any specific automated triggering that's built into Langsmith. At least I haven't found it or read about it in their docs. But you can absolutely build this yourself pretty straightforwardly. You just have to retrieve the metrics that you care about in the timeframe you care about. And then we're if else and from there. Nice. Nice. And that addresses Pano's API question as well. We're going to go ahead and collect any remaining questions and try to get some posts out next week on LinkedIn to answer them. So thanks everybody for your participation today. Thanks Chris, for joining us for Q&A, showing us how LangSmith is done. All right, everybody, that is pretty much a wrap for today. Thanks for joining us. This event has been brought to you by AI Makerspace. And of course, if you want to go deeper on LangChain, on LangSmith, we've got an upcoming cohort in 2023 for LLM Operations. This is going to be our fourth cohort, and we're really dialing in the way that this thing comes together in just four weeks to give you the broad picture of everything you need to build, ship, and share production LLM applications. From Langchain to Llama Index to WanB to Langsmith, and even more, we'd love for you to put your application in and potentially join us. If that's not a great fit for you, or you need a little bit more ramp up, you may want to consider our LLM engineering course that dives into the details of what happens inside an LLM, how it's trained, how we fine tune. That class is contained completely within the Google Colab notebook and doesn't require putting together a lot of different Lego blocks. That one launches January 9th. And if you're interested, that'll be cohort two and it's been going super well so far. If you have any questions, feel free to reach out to us anytime. Greg at AI Makerspace, Chris at AI Makerspace.io. And until next time, keep building, shipping and sharing, and we'll do the same. Thank you so much, everybody. See you soon.
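Here is a rough sketch of that "retrieve the metrics, then if/else" idea using the LangSmith client SDK; the project name, threshold, and alert hook are hypothetical, and the exact client parameters may differ between SDK versions:

```python
# Poll recent run latencies from LangSmith and trigger an action if the P50
# exceeds a threshold. Project name, threshold, and the alert call are all
# hypothetical placeholders.
from datetime import datetime, timedelta
from statistics import median

from langsmith import Client

client = Client()  # reads LANGCHAIN_API_KEY from the environment

cutoff = datetime.utcnow() - timedelta(hours=3)
runs = client.list_runs(project_name="my-rag-app", start_time=cutoff)

latencies = [
    (run.end_time - run.start_time).total_seconds()
    for run in runs
    if run.start_time is not None and run.end_time is not None
]

if latencies:
    p50 = median(latencies)
    if p50 > 5.0:  # hypothetical 5-second threshold
        print(f"P50 latency {p50:.2f}s exceeded threshold, alerting the ops team")
        # trigger_ops_alert(p50)  # your own webhook / Slack / PagerDuty call
```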
LangSmith: Operating Production RAG Applications
3,696
AI Makerspace
20231221
GPT-4 Summary: Step into the world of Generative AI and master the art of evolving LLM applications from prototype to production with our dynamic event! Discover how to effectively baseline and evaluate your system's performance, manage key metrics like cost, latency, and token count, and implement continuous improvement strategies. Delve into the tactical application of these concepts using the LangSmith platform, exploring its comprehensive monitoring, evaluation, annotation, feedback, testing, and debugging features, all built on the robust LangChain framework. Whether you're looking to optimize your RAG systems, fine-tune LLMs, or ensure your application's scalability, security, and reliability, this event will provide you with the insights and techniques to enhance efficiency and performance in the fast-paced world of Generative AI. Join us to learn how to navigate the complexities of operating and improving your LLM applications in production! Event page: https://lu.ma/Langsmithllmapps Have a question for a speaker? Drop them here: https://app.sli.do/event/pchsPem1YK15a5CMfv6DYN Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for the LLM Ops Cohort on Maven today! https://maven.com/aimakerspace/llmops How'd we do? Share your feedback and suggestions for future events. https://forms.gle/N4L7KMmH2Qf2sMFn7
2024-06-13T22:10:01.113796
https://youtu.be/KuAn6Fy9UX4?si=UNfZbfkHjxPWYMJ0
Welcome back to session eight. This is on embedding fine-tuning, and we're going to go ahead and see how we can do this in a tool like LlamaIndex. Now, this is bringing a couple of things together. We want to align ourselves to everything that we're doing here. We're going to do a quick review, review LlamaIndex, and then we're going to build a veterinary camelid index. We're going to build a llama index with LlamaIndex. All right, you guys ready for this? Then we're going to fine-tune it, because vets say some crazy words. Remember, why RAG? Well, because LLMs lie to us, because we need to fact-check them, because we need references, and we need to sprinkle some references into our prompts. But also remember that we talked about how important it is, when you have very specialized domains with very specialized language, that as soon as you build up a RAG system, or in the process of building up your first RAG system, you consider fine-tuning embedding models. That's what we're going to do here today. Because the language is so specialized, it's just not anything anybody would ever say to anyone, and not something you would randomly find on the internet. So let's take a look here. This RAG system, which of course looks like this, combines dense vector retrieval and in-context learning. We're going to leverage LlamaIndex to build this thing. And we're also going to use LlamaIndex to fine-tune our embedding model. So recall, LlamaIndex is a data framework. It's all about that data. LlamaIndex takes NLP documents and PDF documents, aka documents, as source documents. Its first-class citizens, called nodes, are just chunks of those source docs. The node parsers allow us to take the docs, chunk the docs, and create those nodes. Okay. Query engines are what it's all about. This is the big idea with LlamaIndex. So what is a camelid, you might say? Well, camelids include camels, of course, llamas, alpacas, vicunas, guanacos. Hey, look at that. If you were wondering where any of those names came from, they came from the camelid family. And if you're wondering why we don't have any more camelids, well, there's none left in the picture. So maybe that has something to do with it. We've moved on to winds like Mistral and Zephyr. But for this, we're going to look and dig super deep on camelids. Shout out to Ohio State for having really in-depth vet info in their research library on camelids. Apparently, this is where you'll find the International Camelid Institute. And if you're doing work in a place like this, this is the kind of place where you might consider fine-tuning your LLM embeddings, because that's probably going to help improve some of the retrieval and some of the generation. Because otherwise, if you just don't understand the words, you're just going to have a tough time. So building this camelid index, this llama index with LlamaIndex, looks similar to other indexes that we've built with LlamaIndex. And if you consider the ways that you might go about improving retrieval, LlamaIndex is constantly building out these capabilities. But they're often talking about a lot of different ways that you might start to do more interesting and complicated things. And one of those ways is fine-tuning of embeddings.
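As a minimal sketch of that flow in code (the directory path is hypothetical, and import paths differ between older and newer LlamaIndex releases, which use llama_index versus llama_index.core respectively):

```python
# Load the camelid papers, parse them into nodes, build a vector index, and
# stand up a query engine.
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.node_parser import SimpleNodeParser

# Source documents become chunks ("nodes"), the first-class citizens of LlamaIndex.
documents = SimpleDirectoryReader("./camelid_papers").load_data()
parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = parser.get_nodes_from_documents(documents)

# Build the index over the nodes and expose it as a query engine.
index = VectorStoreIndex(nodes)
query_engine = index.as_query_engine(similarity_top_k=5)

print(query_engine.query("What vaccinations are recommended for alpacas?"))
```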
In this particular case, because we have such specialized language, we're going to fine-tune those embeddings. The way that we fine-tune embeddings is, we're going to, and if we're going to have another joker in here, we're going to have to kick them out again. So bring it. Please mute if you are messing around. The ingredients for fine-tuning embeddings are these question-retrieved context pairs, right? So what we're going to do is we're actually going to create these question-retrieved context pairs, and then we're going to take an existing embedding model, and we're going to train it, so to speak, on camelid research paper context. That's it. And we use a very simple approach here, using a built-in Hugging Face sentence transformers loss function. And it's really not that complicated. What we see when we do this is that our hit rate, our ability to find the appropriate context given any particular query, actually improves. And so if you have very specialized language, you might consider fine-tuning embeddings. And the way you do that, we're going to show you right now. Chris, camelid embeddings. Oh yeah, let's get rocking with some camelids. Okay, hopefully you can see my screen. The basic idea here is we are going to fine-tune our embeddings. So why would we want to fine-tune our embeddings? Well, as Greg said, you know, especially in these kinds of veterinary papers, there's just so much language that we have no idea about, or we don't know how it's related to other tokens in our corpus, or, you know, it might have one meaning in kind of common parlance, but have a totally different one in the case of this specific application. So the thing we need to do is, we need to, I'll link the Colab, sure, one second, sorry, here we go, the thing we need to do is fine-tune those embeddings, right? So first of all, get a bunch of dependencies. Second of all, we're going to grab our OpenAI key. Then we're going to get the camel data from our data repository. We're going to go to high-performance RAG, and we're going to download camel papers test and camel papers train. You can see there's a bunch of crazy papers about camelids, and, you know, that's great. What is the intuition behind the QA retrieved answer pair idea to fine-tune the embeddings? Is the loss a binary type of loss? So the way that they do it, actually, is they make the assumption that every other context in the QA pair data set is a counterexample to the found context, or to the selected context. And then I can't remember the specific loss function, but I'll bring it up and I'll show you guys. Now that we've got just a lot of papers about camels, or camelids to be more precise, we're going to go ahead and load those. We're going to load those using our SimpleDirectoryReader, which reads directories, our simple node parser, and our metadata node. Our simple node parser is going to parse out all of our documents into nodes for us. Yeah, I'll bring up the loss function for sure. Once we have these two corpora, we're good to go. Now we're going to generate QA embedding pairs, which we're going to do with everyone's favorite, of course, OpenAI. So we're going to use OpenAI's GPT-3.5 Turbo to generate QA embedding pairs. Then we're going to save those as a data set. And then we're going to do the same thing for our validation set. So now we have our validation and we have our train. Everything's good so far.
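A sketch of that QA-pair generation step might look like the following; it assumes train_nodes and val_nodes produced by a node parser as above, and the exact helper signatures can vary by LlamaIndex version:

```python
# Generate synthetic question/context pairs over the camelid nodes, then save
# them so we do not have to regenerate (and pay) every run.
from llama_index.finetuning import generate_qa_embedding_pairs
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

train_dataset = generate_qa_embedding_pairs(train_nodes, llm=llm)
val_dataset = generate_qa_embedding_pairs(val_nodes, llm=llm)

train_dataset.save_json("train_dataset.json")
val_dataset.save_json("val_dataset.json")
```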
Next, we're going to use the sentence transformers implementation to get BGE-small-en v1.5. It's just a good embeddings model. It's trained on a big corpus, and it performs very well on our task, which is the retrieval task. So that's why we're picking it. The embeddings leaderboards update very frequently, so you can use whichever one you want that performs the best at whatever tasks you need to do. Now we're going to use the sentence transformers fine-tune engine from LlamaIndex. Thanks, LlamaIndex. We pass in our training data set, the model we wish to fine-tune, the output path where we wish to have our model saved, our validation data set, and then the number of epochs we're going to train for. Of course, we could train for more or less time. It's totally up to you. But the idea here is that we have, you know, the same kind of training process that we would for a normal model, but this is for a sentence transformers model. And the idea is to kind of drag, right? We have these embeddings. If we just imagine them in 3D space, we know they're kind of in this cloud, and their proximity to each other, or their direction from the origin, is in a particular spot, and we're just kind of dragging them around, moving them around, re-clustering them in that space in order to align with our actual corpus of documents better. So that's a way you could visualize this if you were a person who liked to visualize things. Once we do all that preparation, we do everyone's favorite step. We call .finetune(), and we see it go. And then we can grab our fine-tuned model out of our fine-tune engine. Now we can set it up as its own embedding model, and what we're going to do now is evaluate that embedding model. So we've created it, and that's good, but we need to evaluate it. We're going to evaluate it with this. So there's a lot of metrics that you're going to get from this evaluation. We're only going to really care about the MAP at K. So this is the mean average precision at K. I believe it's MAP at 5 that it reports back to us. The idea here is we just want to make sure we're retrieving the right kinds of documents in the top five documents retrieved, right? So how often are we doing that, you know, is kind of what we're caring about here. So we want to, for the most part, always retrieve the correct document in the top five. Now, obviously, we're not going to get to perfect with two epochs of training on a very strange corpus, but we can see that with this evaluation, which is all done through wonderful abstractions, thanks to sentence transformers in this case, not LlamaIndex, our base, un-fine-tuned embedding model receives a MAP at 5 of 0.76, and our fine-tuned embedding model receives a MAP at 5 of 0.79. So we do see that there is a real increase between the two. Again, this is two epochs on a very, very strange data set. Ideally, we train for longer in order to get this result even better. But it just goes to show you that even with the smallest amount of effort, we can improve these systems to be better at the tasks we need them to perform. One thing I do want to point out or mention: when you're looking at your MAP at K scores, it is important that you set your retrieval K to be the same value as you see in the metrics, right? If we have a very high MAP at K, or MAP at 5, but we only retrieve three documents, we're potentially shooting ourselves in the foot.
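A sketch of the fine-tune and evaluate steps described here; the model ID matches the BGE-small model mentioned, while the output path, epoch count, and evaluator wiring are illustrative and assume the datasets from the previous step:

```python
# Fine-tune BGE-small on the generated pairs, then evaluate retrieval quality
# (metrics like MAP@5) with sentence-transformers' InformationRetrievalEvaluator.
from llama_index.finetuning import SentenceTransformersFinetuneEngine
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en-v1.5",
    model_output_path="camelid_bge_small_ft",  # hypothetical output path
    val_dataset=val_dataset,
    epochs=2,
)
finetune_engine.finetune()
ft_embed_model = finetune_engine.get_finetuned_model()

# The QA dataset already holds queries, corpus, and relevance labels.
evaluator = InformationRetrievalEvaluator(
    queries=val_dataset.queries,              # {query_id: query_text}
    corpus=val_dataset.corpus,                # {node_id: node_text}
    relevant_docs=val_dataset.relevant_docs,  # {query_id: [node_id, ...]}
    name="camelid-eval",
)
print(evaluator(SentenceTransformer("camelid_bge_small_ft")))
```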
So you want to make sure that you align the desired behavior of your retrieval pipeline with the metric that you're looking at. Just to point out the fact that, you know, you might not see, let's say we did RAGUS on this, but we kept it at only three retrieved documents, we might not see an improvement. And that's because we weren't actually looking at the right metric in order to make a decision about which is quote unquote better at that task. But with that, I will kick it on back to Greg. All right. So we saw those numbies going up. That was pretty cool. We didn't even train that long, but it did help. And that's pretty legit. You know, that's kind of the big idea. There it is. In a nutshell, fine-tuning embeddings. Lots of other things we could do for any given RAG system. All sorts of fun retrieval. All sorts of fun different node parser stuff in Lama Index to play with. All sorts of different evaluation things we could potentially do to instrument this thing and measure different numbers, see if they go up too. But that's a wrap for this little sesh. There are many ways to enhance retrieval and thus generation. Fine-tuning is one that you might want to pick up for very specialized domain language. And, you know, an example of that is the VET LLAMA index with LLAMA index. So as we wrap up session eight, the final session, session nine, that's not directly related to you guys presenting what you've got today, is going to be on deployment of your app. Once you've got all the logic, all the brains, all the everything in the RAG system, it's time to serve that thing up to users. And so we're going to see how to wrap everything in a chainlet front end, deploy it to Hugging Face, and make sure that you've got that killer demo that you can show live by the end of the day, night, or morning, depending on where you are in the world, as we start to make it into the final hours of the first annual Chativersary Hackathon.
Session 8: Fine-Tuning Embedding Models for RAG Systems
946
AI Makerspace
20231204
What you'll learn this session: - How to tune open-source embedding models to align with specialized language, like that used for research Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Apply for one of our AI Engineering Courses today! https://www.aimakerspace.io/cohorts
2024-06-13T22:11:54.074564
https://youtube.com/live/NdF609kO8FY?feature=share
Yo Wiz, you heard about LangGraph yet, my man? I have, yeah. I've heard something about it. It seems like it's all about cycles and chains, cycles and chains. What's the one-sentence description of why we should care? Well, it lets us build agentic workflows in a way that's agent forward, or agent focused. Agent forward, agent focused. Okay, okay. Okay. So, but then there's also OpenGPTs that came out super recently, too. And that's related to this LangGraph thing. Yeah, that's right. They leverage and build on top of LangGraph to help build these OpenGPTs. Okay, so it sounds like we're going even another layer more meta today, then. That's right, we're going super meta, dude. Awesome. I love every time we abstract, and I can't wait to get into this today. I'm gonna take the reins from you for now and introduce everybody to agent forward and agent focused. And this'll help us really understand how OpenGPTs are powered and how we can build with them super easily at the next layer of abstraction. Hi, everyone. I'm Dr. Greg. That's The Wiz. We're co-founders of AI Makerspace. And thanks for taking the time to join us for today's event. What you'll learn from today's session is exactly how LangGraph was sort of conceptualized, the problem it aims to solve, as well as how it sort of enables this agent-first, agent-forward, agent-focused application building. I'll cover the concepts, Wiz will be back for the code, and you know the drill. All right, let's get into it. Quick shout out though, LangChain just did some pretty sweet rebranding. So check out this awesome new logo with the old AI chained together. Shout out to LangChain, crushing it on the subtle details there. So let's align our aim for the day. We already mentioned it, but we're going to cover LangGraph. We're also going to understand, through our coverage of LangGraph, how it actually powers OpenGPTs through MessageGraph. And a lot of the stuff we're going to talk about with LangGraph isn't necessary to understand what MessageGraph is doing, especially if you've built anything with the OpenAI API or OpenAI Assistants. However, it is useful to get the context. And this is what I'm here to provide today. The OpenGPTs are really sort of a low-code solution that you can pick up off the shelf for free. And you can build chatbots, RAG applications, as well as more complex assistants. Chris will show how to build all three today. So we'll cover LangGraph, then we'll cover OpenGPTs, and we'll build our own. All right. So LangGraph. LangGraph is all about adding cycles to applications built on LangChain. Okay, what does this mean? Right, what does this mean? Well, let's talk about cycles and chains. Previously, LangChain, before LangGraph, sort of lacked a way to easily introduce cycles into chains. When we think of a cycle, we just want to think of sort of a more complex, more dynamic for loop, really. That's kind of what we're thinking about here. And so if you're familiar with agents in LangChain already, for example, the agent executor, that capability that's built into the framework, that's actually been updated to be more agent first, agent forward, more cyclic in nature. Okay. And we'll kind of see that by the end. Because realistically, right, a chain is just a very, very simple one to the next to the next. Whereas if we're dealing with more of a cycle, then we want to be able to do more complex things. Now, you can do a lot with a chain.
For instance, a lot of the most popular GPT is out there are just built on simple chains that have been prompted with a great system message. Prompt engineering can get you a very, very far way. For example, in the Langchain blog, they shout out character AI as being a very, very powerful way to leverage simple chains. But of course, as we get into more complex things, cycles become more important. So we want to connect LLMs to other sources of context. We want to sort of be able to add any additional information that we can find into the prompt, in the context that we can learn from. We want to be able to loop through. We want to be able to get an output, decide on a tool, make a decision about exactly what we should be doing next, and continue until we get the best possible output. Now, we've shown some agent stuff before and a way that we can sort of think about this is we kind of have this tool belt and the agent is kind of the brain. And we've shown this legendary meme before. It's still very useful way to sort of think about it as we kind of have this LLM brain pulling from different sort of think about it as we kind of have this LLM brain pulling from different sort of tooling. And this is kind of where we can start our journey today as we try to sort of fill in this space, this kind of wide space that exists between simple LLM calls and fully autonomous agents, because there's a number of distinct ways that we can engage with reasoning and with context between these two. Of course, in the extreme, when you're talking about biological things that actually exist in the real world, this term agents can mean something that sort of has a simple rule associated with it. And when you put many of them together, you can get this sort of emergence of this complex behavior. This is a quote from a book on complexity. Agents, for example, could be neurons, which form a brain, giving this sort of emergent structure at higher and higher levels. So this complexity that kind of emerges from these simple rules, from these sort of simple agentic behaviors is ultimately what we're aiming at. Now, another famous guy in the space that you've probably heard of, if you watch our channel, is Stephen Wolfram. very simple rules producing nothing less than extraordinary complicated behavior is something he's been studying his entire career. And it's really fascinating to kind of see how simple the rules can be and how much complexity can emerge, even within very, very constrained environments. And as we get into AI and agentic thinking, there's many different levels in which we could potentially see some sort of emergent behaviors. And so what are those levels? Well, again, going back to this idea that we have context and we have reasoning, this is where we're gonna have the most interesting LLM applications. And the different ways that we might combine context and reasoning are going to give way to the cognitive architectures. This is sort of a laying chain terminology that we might be able to leverage. OK, so if we sort of break this down, maybe there's five levels. So a single LLM call is the easiest, simplest level. Harrison covered this in a TED Talk that came out in December. I encourage everybody to give that a watch. It's only about eight or nine minutes, maybe 10 minutes or so. And beyond a single call, you can sort of string a few LLMs together, but still be focused on sort of one input, one output, kind of just running through the chain. Now, beyond that, it starts to get more interesting. 
For instance, using an LLM as a router, where it's going to choose which tool, which retriever, which prompt, you know, maybe even you have a list of prompts you might pick from, which one to use next. Now, if we want to sort of take the idea of just simple routing, right, you're sort of triaging as a 911 operator to this idea of a state machine, then here we can sort of say, you're not just using the LLM to route in some kind of loop, but you're also allowing for a state, a particular state to try to be reached. And this is going to be very, very important for enterprise because a lot of times they don't want fully autonomous AIs doing everything, but they might want to get an alert to the right human when a particular state is achieved from one of their applications now the fully autonomous agentic applications those are going to be fully human out of the loop and the framework that Langchain has sort of introduced here that I really like is kind of this idea that classic code right fully sort of static human operated code, we're deciding which outputs to, we're deciding what the output is of any given step. We're deciding which steps to then take based on that output was just taken. So this is sort of the classic if-else looping AI, right? And so now we're sort of introducing some additional reasoning into this that doesn't necessarily have to be hard-coded, making it a little bit more dynamic. So the simple LLM call, we're just sort of taking one step with the LLM and we're getting some output. When we have sort of a chain of LLMs, we're then sort of coming in and versus the human, we're just taking multiple steps with the large language model these are not very interesting it gets interesting when we start to get to routers and what a router is basically saying is it's saying hey after we get this llm output go over there and proceed to some next output. Go over there and do something and then stop. And that's very useful because you may want to go over there or go over there or go over there. Now, a state machine is sort of taking this to the next level and saying, go over there. And once you achieve this particular state X, then proceed. Okay. So here we're sort of adding an additional layer of complexity, an additional layer now of looping. This is sort of a place where we start to see the cyclic nature start to emerge, right? Before we're kind of doing things linearly, routing is sort of linear, but this idea of a state machine where we're sort of looking for a particular state, this is looping fundamentally. And of course, the fully going to be something that is kind of doing everything for the human. Now, this can be simply viewed by this diagram we've seen earlier, where the LLM is kind of the brain, right? The meme, remember, was simply the LLM being the brain of Batman. So the LLM is making all the decisions. And the easiest way to think about sort of the kind of meta super autonomous level is with this sort of two-step agent loop. And this is the kind of reasoning that is going to be baked into pretty much all the agentic tools you pick up. These steps are going to be just repeated over and over until a final response and output is generated. You're going to call the LLM. You're gonna determine what actions to take. Determine means sort of reason, or what response to give the user. So an action is gonna be associated with a particular tool. Generally, that tool is also gonna be associated with retrieving some information. 
And then we're going to take an action, retrieve information, send a call to a particular tool, get a response, do some additional reasoning, call the LLM to do that reasoning, and here we are back at step 1. We loop until we get the final output as discussed before. Now, this is the same loop that powers the core agent executor in Langchain. It's also the same loop that it's the same type of logic that we see in projects like AutoGPT. So this sort of agent logic is kind of fundamental. And so we want to think about it in this loop. So this sort of agent logic is kind of fundamental. And so we want to think about it in this loop. We want to think that given a user question, we're going to enter the loop. making an observation, taking action, making an observation, taking action, until we sort of are finished looping through all of the reasoning, action, steps, and we're like, okay, that's enough. I think I have the final answer. Ready to go to the output. The action and the corresponding observation here are each time added back into the prompt of the llm we call this in langchain the agent scratch pad okay and then the loop sort of resets with a different input that's made up of now the input plus the previous output and this sort of forms the basis for how Langraph is going to play with the underlying mechanisms that make OpenGPTs work. Okay. This is not new. These ideas are not new, but they are starting to mature. For instance, the modular reasoning, knowledge, and language system from AI, from AI-21 labs, the Markle aka Miracle paper came out May 2022. And this sort of agent cognitive architecture has been evolving ever since. We saw then the React paper, the reasoning action framework paper come out October 2022. And Langchain has since built in the zero-shot React agents. This is sort of the most general purpose tool where your general purpose agent, where you're selecting tools based on the description that you've given each tool. And you're just asking the LM to reason through based on what it sees from the given current output, which tool it should select next. And so what's currently implemented has kind of been expanded from that particular paper. And then of course, interestingly, auto GPT all about agents and their sort of highest level quote they raised $12 million with in November was to transform AutoGPT into the most significant open source project ever and unlock a new era of work for everyone. Sounds pretty epic. So that helped land them $12 million as well as being a ridiculously huge, popular open source project on agents. But again, Miracle, React, AutoGPT, these are sort of talking about this fully autonomous agent. And the thing that's interesting is that more control than these fully autonomous agent is, is often what's required. Because in practice, when we want to put agents into production, we don't want them to go off and do something crazy. For instance, we might want to hard code some of this stuff in. We might want to force an agent to call one tool always first. We might want to have control over the possible sequences in which tools can be called in general. Of course, we, as I discussed before, might want to have many different prompts for a single tool, and that's going to depend on the factors in our current loop. 
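A bare-bones sketch of that loop in code (call_llm and tools here are placeholders, not any particular library's API):

```python
# A stripped-down version of the agent loop: call the LLM, either execute a
# tool and append the observation to a scratchpad, or return the final answer.
def run_agent(question, call_llm, tools, max_steps=10):
    scratchpad = []  # running list of (decision, observation) pairs
    for _ in range(max_steps):
        decision = call_llm(question=question, scratchpad=scratchpad)
        if decision["type"] == "final_answer":
            return decision["content"]
        # The LLM picked a tool: run it and record what came back.
        observation = tools[decision["tool_name"]](decision["tool_input"])
        scratchpad.append((decision, observation))
    return "Stopped: exceeded the maximum number of reasoning steps."
```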
So when we talk about these more controlled flows, flows this is where the terminology that langchain uses for state machine is important because remember this is sort of that level between router and agent and so this is where we want to sort of go and look for a state before proceeding. All right, so this is this is sort of a way to integrate more control than simply the fully autonomous agent. Okay. All right. So what do we have then? Well, we have this sort of idea of Lang graph where we're simply creating cyclical graphs. simply creating cyclical graphs. Okay. What's another way to say that? Well, Landgraf is a way to create these state machines by specifying them as graphs. Okay. you tracking. So here's the first sort of structural component of Landgraf. Landgraf isn't that complicated, really at all. And what forms the basis of Landgraf is this state graph. It's the class that represents the graph. We can initialize it by passing in a definition of the state. And this definition is the object that we're updating over time, a.k.a. cycles, a.k.a. looping, right? So like at its core, Lang graph is really not all that complex. Now, of course, along with any graph representation, we're going to have nodes and edges. We can add nodes of various types. For instance, an end node is very important because we need to end our cycles at some point. important because we need to end our cycles at some point. And then we can also add edges, so starting edges within nodes. We can add normal edges, where these are sort of always going to be called. And then conditional edges, of course, are going to be a really key factor here because it's going to kind of depend. Well, which one are we going to next? So the interesting thing that's happening here is that with the langchain v0.1 that came out, they actually recreated the entire agent executor using this Lang graph setup. Because while we can still use it for the existing agent setup, the internals of how this thing operates in cycles is much more easily modified. And so what we're saying is we're saying that that internal state is more cyclic in nature less linear as a chain and so the chat agent executor is what connects this all back to open gpts and to things that we've seen in the past. The chat agent executor, of course, everybody's into having models tuned for chat, instruct tuned models also tuned for chat. And when we're doing a chat, we want to essentially use a list of messages, right? That's our state at any given time is the complete list of the conversation, right? All right. So this forms the basis for what we need to know so we can really grok open GPTs. Well, it turns out open GPTs not that hard. Open GPTs, they run on message graph, which is a particular type of graph in Lang graph and message graph uses message passing so check this out it takes in a list of messages and returns messages messages plural to append to the list this is cool because what happens is that at any given point, we just have a big list of messages, the conversation. All that can go back into the context window, right? It's more context for each step, just like chain of thought examples, right? Okay, so what's noteworthy about this besides the message passing? Well, first of all, message passing is sort of a common method for communication and distributed systems in general. Visualization of the work being done is much easier with this message graph and with Landgraf, of course. 
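A minimal MessageGraph sketch, assuming an OpenAI key in the environment; import paths may differ between LangGraph versions:

```python
# State is just a list of messages; each node returns message(s) to append.
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import MessageGraph, END

model = ChatOpenAI(temperature=0)

graph = MessageGraph()
graph.add_node("chatbot", model)   # the chat model itself is a runnable node
graph.set_entry_point("chatbot")
graph.add_edge("chatbot", END)     # single hop: respond, then stop

app = graph.compile()
result = app.invoke([HumanMessage(content="What is LangGraph?")])
print(result[-1].content)          # the AI message appended to the list
```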
And this idea of a message list seems really, at least conceptually, extensible to multi-agent systems, right? Because each agent will just then append messages to the list of messages. Right? You know, getting pretty meta here. And it's also super related this message graph thing, by the way, to the chat completions model that we've seen many times from open AI. And then the sort of chat completions model structure is one that we start to see when we interact with a lot of different models, even open source ones on Hugging Face. And it's very, very related to the OpenAI Assistance API as well, which we've covered before and we'll go into real briefly just as an aside. But in the Assistance API, we're appending messages to threads, if you recall, if you've seen our agents and opening assistance API event before. The chat completions model is, of course, just the system user assistant setup, where we have a list of messages and they each have a role. And what we're gonna do is we're going to put in a list of messages on the front end of an LLM. We're gonna get out a single message and we're gonna then append that to the list of messages. So in list of messages, out single message, not many messages. So we're sort of extending this, right? And this is the similar thing to what OpenAI Assistance API did. They said, you know, an assistant is just something that maintains persistent threads and also calls tools. What's a thread? A thread is a conversation between an assistant and a user. So what we get is we get one system level prompt and then assistant user, assistant user, assistant user all the way down, right? So what we're doing is we're sort of storing this message history. And then notice, of course, if it gets too long for the context length, then we're making sure that we're doing something about that. So the assistance flow is to create an assistant, create a thread, add messages, run the assistant to trigger responses. That'll call the tools, that'll be added to the thread. Very similar things are happening in OpenGPT with laying graphs message graph that underlies it all. What we're able to build as a result is we're able to with Lang Graph's message graph that underlies it all. And so what we're able to build as a result is we're able to build assistants, rag bots and chat bots. Chat bots are prompt only. Rags are, of course, used when we want to combine our own data from some source. Now, rag bot on OpenGPT today means one retriever, and we're always going to use it. Assistants are more complex. You'll notice there's a number of different things we can use with assistants, and Chris will walk us through that here in just a moment. But the chatbot, again, it goes back to this very simple architecture. This is the one we're going to be kind of considering. We can mess with the prompt, but that's about it. The ragbot, we're going to use retrieval before we augment the prompt context and put it into the Lm and of course the assistant is going to allow us to be more autonomous and more agentic and it's going to allow us to be again more agent forward agent focused from the ground up architecturally the way the code has been written. We can put lots of tools in from archive to DuckDuckGo and many others that the Wiz will show us right now. Chris, over to you, man. Thank you, Greg. The basic idea here is we're just going to do a quick walkthrough of Langraph is we're just going to do a quick walkthrough of Landgraf to kind of, you know, figure out exactly what's going on here. 
And then we will move to looking at how those open GPTs will look and how they work. So we're going to start with the, you know, LCL plus cycles, very powerful framework. The idea is exactly as Greg said, right, is that we want to have this idea of, or we need to have this idea of looping, right? We need to have this idea that we can keep going through cycles. We don't have to, you know, just just go through one flow and then be done. Instead, we want to be able to go through one flow multiple times potential this and that, right? The idea basically that this comes from is, you know, loops are kind of naive, we, you know, they're hard to work with or extend or change. And this idea of creating cyclical graphs is a lot easier to understand, it's a lot more intuitive to work with. And we'll see that as we progress through the notebook. So again, why land graph, the big reason here is that, you know, we want that agent forward approach, we want the agent to be kind of at the center of this flow as opposed to kind of like an afterthought, right? When we do this looping strategy, what's really at the center of the experience there is that loop, right? And the agent can act in the loop and that's dope. But when we create these cyclical graphs that our agent can kind of navigate through, it's gonna lead to a more agent-forward solution. So we can think about the systems and create them more straightforwardly. We're going to grab a few dependencies to start today. Of course, we're going to need Langchain. Of course, we're going to need Langchain OpenAI and Langgraph. We're going to use two tools in this example, which is archive and DuckDuckGo search. These are two community tools that will leverage to do different kinds of search. And we'll see an example of how these are leveraged at the end of the notebook. Because we're going to be using open AI today, we need to provide our key and just kind of like tacked on, but not something that we should forget is the Langsmith integration, right. So not only do we have the kind of standard, you know, Langchain ecosystem changes with LCL and kind of async everywhere and all this stuff. We also have that easy Langsmith integration, which is going to help us add to that visibility in order to understand what our systems are doing, why and when. So that's great. The first thing we're gonna do when we're thinking about laying graph is we have to create some kind of tool belt, right? Likely what it is that we want our agent to do is make decisions about the text that receives and then apply certain tools in order to either answer questions better or have a better understanding of the user's query. And the way we create the tool belt should be fairly familiar. We have, you know, just a list of tools. There you go. Can't get more straightforward than that. And this idea is present in, you know, lang chain from before. So nothing new here. We're just going to pass on those tools and those tools are going to be what our agent or our lang graph will eventually leverage. Now, we do need some way to call those tools, right? It's great to have them, but we need some way to call them. And so we're going to use the tool executor to do that. And all we have to do here is use the prebuilt tool executor. here is use the pre-built tool executor. This is just a wrapper, basically. It's going to take a tool invoke class, and that's all we have to do. 
Next thing we're going to do, now that we have our tool belt, and we've kind of converted this to a format that makes sense, like our tool belt, we're going to set up our model. Now, you can choose any model here, right? It's Langchain. You can do whatever Langchain's got integrations with. You can choose, but you are going to set up our model now you can choose any model here right it's lang chain you can do whatever langchains got integrations with you can choose but you are going to see better results with quote unquote more powerful models or models that are better at reasoning right this is this idea of we want this kind of cognitive uh you know engine and that's going to be kind of our larger models that, that you can access through APIs. Another advantage of the API access is, of course, we can take advantage of the synchronous nature, you know, without worrying about scaling our own architecture. But, you know, that's a tangential benefit. The big idea here is we need something with enough brain power to reason well when it comes to these kinds of graphs. In the following example, we're going to be looking at something that does rely on function calling, which is not exclusive to OpenAI, but certainly is one of the benefits of OpenAI. If you want to use this exact example with a different model, you'll have to ensure that you're either making modifications to your tools to not use the function API, or function calling API, sorry, or you'll have to use a service that provides its own wrapper for that function calling API. Otherwise, we set up the model the same way we always would, right? And because we live in this world of langchain v0.1.x, right, we have the confidence that these are all runnables, right? And that runnables behave in a very specific way that's integrated very well with LCL. So we get some benefits to those, including that Langsmith integration, but also asynchronous capability. So you don't have to fret or worry about weird async errors coming up, which is great. Now, because we're using that function calling, we want to actually modify our tool belt a little bit, right? We want to modify it so that it's converted to an open AI function. All this is really doing is making sure that we have a smooth integration between the function calling API and our existing tool belt. You know, when you create a custom tool, which is a straightforward process, right? You can still use this helper function to convert to that kind of function calling tool. So you don't have to feel like you've got extra work to do. If you're using custom tools, this is still going to work for you, which is dope. Next up, we kind of want to get state involved, right? So Greg mentioned it many times. We're going to mention it many times all the time here. The core idea here is that we want to have potentially stateful applications. They don't have to be, right? But it is useful to have states so that we can understand the current situation of our application. I won't go into the big paragraphs here. You can read them as you go through the notebook. But the idea is that we have this stateful graph which is going to pass around this agent state. And that state is very similar to our scratch pad, but it's a bit more robust and more easily customizable. And all we're going to do is in this case, we're going to set up our agent state to be a dictionary, right, with the key messages and the value is just a sequence of messages. 
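A sketch of binding the tool belt to the model via OpenAI function calling; the helper names here (convert_to_openai_function, bind_functions) are from the versions current at the time, and newer releases favor bind_tools instead:

```python
# Convert the tools to OpenAI function schemas and bind them to the model so it
# can emit function calls.
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, streaming=True)

functions = [convert_to_openai_function(t) for t in tool_belt]
model = model.bind_functions(functions)
```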
So all our state really is is an ever-growing list of messages that exist between the LLM and the user, right? You can see here just a quick example, right? When we start, we've got no messages. And then when we enter, we add one message. And then when it goes to a tool or it goes to a node, it's going to add some additional messages and on and on. This is all we're doing. And the way this is useful to us will become very clear in the next section here. And the way that we set this up is pretty straightforwardly. We're just going to base it off of type deck. And you'll notice here that we have this operator.add. That's because there's two different versions of state, right? We can have a set state where every time we overwrite our existing state or a parameter our state, and then we have add state, which is just kind of this ever-growing list that we're talking about. So now that we've got state, we've got our model, we've got some tools, we got to talk about graphs a little bit, right? We got your most basic graph here with some nodes and some edges. You know, this is a classic. All this is going to do is serve as a visual representation of what our application looks like. And one of the best parts of Landgraf is that they are easily composed into these visual representations to allow everyone to have clarity, you know, not just you as the engineer, but it makes it easy to describe to people how that system works in this kind of one-to-one flowchart method, which is super dope. When we're talking about nodes, we're talking about, you know, a function or an LCL runnable. So these are things that are going to do stuff. Which is super vague. But the idea is, right, we're going to provide information to these nodes and they're going to in some way transform that information. Or in this case, we're going to provide state, state object, and then it's going to move or modify state. Which is dope uh we also have this idea of edges edges are just kind of like paths we can take between nodes right we have a very special edge that we're going to talk about that greg's already touched on uh in a second but we're just going to start by first making some nodes uh we're going to make a call model node in a call tool node right the call model node we a call tool node, right? The call model node will call the model. That's crazy, I know. And then it will modify the state with the response of the model. And then our call tool is going to invoke our tool, get the response from the LLM, and it's going to modify our state to add the response from that tool call. Now, you'll notice we're using the asynchronous definitions here. This is because we can, right? Why not? I mean, LCL is fully integrated with asynchronous capabilities. So let's just start from the hop using async. So now we've got two nodes, right? Call model, call tool. What we're going to do is we're going to set up this state graph with our agent state. Remember, this is just a dictionary with key messages and then the ability to add messages into a list. So we're going to set up our state graph and then we're going to add two nodes to. Right? We're going to add the call model node as agent and the call tool node as action. And this is our graph right now, we have these two nodes, they're not connected, they don't do anything yet. So we have to we have to kind of, you know, complete the flow. So the first thing we're going to do is we're going to add an entry point. And we're going to set our entry point to be agent, right. 
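A sketch of the state object and the two nodes; the notebook uses async definitions, but the synchronous equivalents are shown here for brevity, and it assumes the model and tool_executor from the sketches above:

```python
# The state: a dict whose "messages" entry is appended to (operator.add) rather
# than overwritten. Two nodes: one calls the model, one calls the chosen tool.
import json
import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage, FunctionMessage
from langgraph.prebuilt import ToolInvocation


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]


def call_model(state):
    response = model.invoke(state["messages"])
    return {"messages": [response]}  # appended onto the running state


def call_tool(state):
    last_message = state["messages"][-1]
    function_call = last_message.additional_kwargs["function_call"]
    action = ToolInvocation(
        tool=function_call["name"],
        tool_input=json.loads(function_call["arguments"]),
    )
    response = tool_executor.invoke(action)
    return {"messages": [FunctionMessage(content=str(response), name=action.tool)]}
```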
And this is going to take our input, and it's going to move our input into our state object, and then it's going to pass that into our agent node. That's it, right? So this is our entry point. Next, we want to have the ability to call a tool when we need it and not call a tool if we don't need it. And so we're going to utilize this conditional edge. The conditional edge is kind of what it sounds like, right? It's an edge that's conditional on something. So in this case, we're going to check our state to see if it has a function call. If it doesn't have a function call, we're going to move to the end node. If it does have a function call, we're going to continue on that edge, right? So this is our conditional edge. And the way we add it is pretty straightforward. We're going to start or our source node or our origin node is going to be agent, right? So we're going to start at this green box. And then we're going to say, hey, move the state object through this function. And then depending on the output of this function, take one of two actions, right? And this is what we provide this mapping for. If we receive the continue response from our should continue conditional, then we're going to move to the action node. And if we receive an end response, we're going to move to the end node. And that's it, right? And the way we can express this is classic flowchart terminology. We can provide this agent. It's going to go to the should continue conditional edge. If we continue, if we get the continue flag from our function, we'll go to the action node. And if we get the end response from a function, we're going to go to the end node. Easy as that. Now, there's one more connection that's missing, right? Because we can get to action, but then when we're at action, how do we escape, right? So we have to add one more edge, and that edge is just always going to take us from action to agent, right? And so this is what we have here. So now we start at our input with our state object with nothing. We add the input message to that. We move it to the agent node based on the agent node's response, which is stored in that state object. We're either going to be done or we're going to move on to a tool. Once we get a response from the tool and we modify the state object to indicate we've got a response from the tool, we go straight back to the agent and we ask the question again. And we keep doing that, right? So this, I mean, you can see this is just a loop, right? We've got a cycle here, but we're able to have this loop or cycle be more expressive and we don't have to actually code a loop in, right? We just go to the next node, make a decision, go to the next node. We don't have any decisions to make here. We just go to this node, get response, make a decision, and then potentially go to end. That's the whole idea of laying graph. We're just constructing these graphs and these graphs are easily visualizable and help us to understand clearly what our system is capable of doing. Once we have it set up, we need to compile it. This is just kind of a classic step. This is going to make sure that all the inputs and outputs match up, that we haven't missed anything obvious, and it will raise alarm bells if you have. And it's going to be what lets us turn this into something that we're used to working with, which we're able to use just like it's a LCEL runnable, right? So we can just call a invoke on it and pass in our messages. This is our state dick, right? 
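A sketch of wiring and compiling the graph as described, reusing the AgentState, call_model, and call_tool pieces from the sketch above; the example question is illustrative:

```python
# Wire it up: entry point -> agent, a conditional edge to either the tool node
# or END, and an edge from the tool node straight back to the agent.
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
workflow.set_entry_point("agent")


def should_continue(state):
    last_message = state["messages"][-1]
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    return "continue"


workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "action", "end": END},
)
workflow.add_edge("action", "agent")

app = workflow.compile()

inputs = {"messages": [HumanMessage(content="What is QLoRA in machine learning?")]}
result = app.invoke(inputs)
print(result["messages"][-1].content)
```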
This is just the Pythonic representation of this dictionary that has key messages and then a list with a message in it, which is great. And we can call that and we can see, navigate through this whole list, that's great. Our input, we go to the AI message, the AI message is like, ah man, actually we gotta do a function call. Well, we know that this function call means we should continue. So we're gonna continue onto action. And we see that this action is a DuckDuckGoSearch, a function call. We know this function call means we should continue. We're going to continue on to action. We see this action is a DuckDuckGo search, which is huge. We get back this function message with some DuckDuckGo results. Finally, we have enough information to answer and we come to end since there is no additional quarks in this AI message. When we hit this conditional, we simply move to end. And that's it. That's the whole thing. We've got it written out here to make it more clear. And we do one more example. And in this example, you can see, you know, we both query DuckDuckGo and we query archive. We get a response. And then because we've got this response that clarifies information we needed previously, we make another function call to DuckDuckGo for the author's bio. And the query in question here is what is QLORA machine learning? Are there any papers that can help me understand? Once you have the information, can you look up the bio of the first author on the QLORA paper, right? So in order to do this last part, we need information from this previous part, and we can see that our LandGraph is able to navigate that. And again, we didn't set up any loops. We didn't set up any specifics of when it should end and how it should end. We just got it done through LandGraph. We let our agent be the thing that navigates that path for us. And it does a great job. So that's LandGraph. We let our agent be the thing that navigates that path for us. And it does a great job. So that's LandGraph. LandGraph is dope. But it is, I mean, you can see here, we're writing a lot of code. You know, we're having to do this whole, you know, code thing, care about async, all this other stuff. So wouldn't it be easier, right, if we had open GPTs? And we do have open gpt so it is easier uh and this is all all that we're gonna look at for the next few minutes is just you know how we use open gpts to actually spin them up is very straightforward so if you want to kind of go through this uh you know line by line start the back end that's great but they also provide everyone's favorite just docker right so you can just docker compose up as you see here and then you're gonna see this kind of beautiful ui that we see here so you'll notice a couple things about this ui it looks kind of similar to a different ui that we might have seen before. We'll just go to this one here, sorry. And that UI is the custom GPT UI from OpenAI, right? Once we're in here, we're able to do a couple things. First of all, we can start new chats. Second of all, and perhaps more interestingly, we can start new, or we can create new bots. So let's create a new bot. And let's look at the options that we have that Greg was talking to us about earlier. We have a simple chat bot, the simple chat bot. All it does is it's what you think, right? You give a text and it returns text from the LLM and on and on. We can modify its behavior through the use of an instruction, right? So in this case, one of the examples that we have is pirate bot. right? 
So that's LangGraph. LangGraph is dope. But, I mean, you can see here, we're writing a lot of code — we're having to do this whole code thing, care about async, all this other stuff. So wouldn't it be easier, right, if we had OpenGPTs? And we do have OpenGPTs, so it is easier. And that's all we're gonna look at for the next few minutes: how we use OpenGPTs. Actually spinning them up is very straightforward. If you want to go through this line by line and start the backend yourself, that's great, but they also provide everyone's favorite — just Docker, right? So you can just docker compose up, as you see here, and then you're gonna see this kind of beautiful UI. You'll notice a couple things about this UI: it looks kind of similar to a different UI that we might have seen before — we'll just go to this one here, sorry — and that UI is the custom GPT UI from OpenAI, right? Once we're in here, we're able to do a couple things. First of all, we can start new chats. Second of all, and perhaps more interestingly, we can create new bots. So let's create a new bot and look at the options that Greg was talking to us about earlier. We have a simple chatbot. All it does is what you'd think, right? You give it text and it returns text from the LLM, and on and on. We can modify its behavior through the use of an instruction. So in this case, one of the examples that we have is pirate bot. Basically, we just ask it to act like a pirate, and then it's able to do that. Now, that's obviously a kind of funny, toy problem, but the idea is that in order to do this, all you have to do is say, you're a helpful assistant, you speak like a pirate — or whatever your use case happens to be. You select GPT-3.5 Turbo as your LLM, you name your OpenGPT, you hit save, and then you can say, hey, what's up? And we got it right there. And that's all we had to do. Just like OpenAI's custom GPTs, right? Very simple, straightforward process, which is awesome. Now, obviously, we can get a little bit more complex than that. Not only can we create simple chatbots, but we can also create a simple RAG chatbot. Basically, what this is going to let us do is add files. So if we click add files, we can add, say, the book Frankenstein, and then we can say, you're a helpful assistant, you speak like a Victorian scientist — or whatever you need it to do. The idea is, all we need to do is provide instructions for how we want this thing to behave, provide some data, select the engine that we want, and then we can just get this going. We click save, and then we can start interacting with it. And this is all we need to do — this is the whole process of creating, say, a simple RAG bot. We can ask questions like, who was the monster in the novel? The monster is the creation of Victor Frankenstein, a being of gigantic blah, blah, blah — go on, man. The idea is that we can build these just like we would through OpenAI's custom GPTs, and we get all the bells and whistles that we might expect, where we can give this kind of feedback. And then in the final example, which is going to be our assistants example, we get something even cooler: we actually get tool confirmation. Tool confirmation is going to let us sit in the loop and decide if we should or shouldn't use tools. So this is super useful when it comes to helping people understand when we're using a tool and letting them be sure that they want to use it. These kinds of human-in-the-loop processes are going to help people really interact in an intelligent way with our system. Now, this assistant is the closest thing to the actual custom GPT, right? Not only does it have access to a number of tools, including custom tools — which you can incorporate if you're actually running this thing from the GitHub and you've made some changes or added some new tools; again, a more code-heavy solution, but still possible — but we also have the ability to add this extra kind of flavor or flair. We can connect all these fun tools and all these fun different cognitive engines. And in terms of the time to getting it going, we're able to build these very quickly. That's the power of OpenGPTs: beyond just being a cool framework, it lets us build things very quickly in a customizable fashion, and it also helps get the rest of our stack involved, right?
So we can have other people play with these tools and understand where they're best used, and that's an incredible part of the feedback loop when you're creating these applications for a production environment. So OpenGPTs is super cool, powered by our favorite, LangGraph, right? LangGraph is the behind-the-scenes, and this is kind of the veneer, the top coat on that, which is pretty awesome. And with that, I'll pass you guys back to Greg, who will close us out for today. Yeah, thanks, Chris. That was an awesome walkthrough, super detailed on both LangGraph and OpenGPTs. And just to do a quick review here: LangGraph is all about adding cycles to applications built on LangChain. It allows us to create cyclical graphs, and specifically it allows us to create these state machines by specifying them as graphs. Why state machines? Because when we're actually building these applications in production for real companies, more control is generally more desirable than fully autonomous agents. OpenGPTs, on the other hand, are super easy to set up and use. Definitely highly recommend checking these out, especially if you just need to get a quick prototype with public data up and running. These run on message graphs, a special type of graph within LangGraph. They simply take in a list of messages and return that list with new messages appended — now we have a new list of messages. It's not all that complex when you break it down. And so what we see today, in the end, is that LangGraph allows us to have these cycles, these loops, these states. It's an agent-forward approach, and from the bottom up it allows us to do these things more intelligently. The message graph is very powerful and very straightforward, aligned with the other things we see in industry. And we can do assistants that are complex, we can do simple RAG bots, and we can do chatbots.
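For a sense of what that message graph looks like in code, here is a minimal sketch using MessageGraph as it shipped in early LangGraph releases (newer releases lean toward StateGraph with an add_messages reducer, so the exact imports are version-dependent, and the model name is just a placeholder):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, MessageGraph

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

graph = MessageGraph()
# Each node receives the running list of messages and returns new message(s)
# to append, so a chat model can slot in directly as a node.
graph.add_node("chatbot", model)
graph.set_entry_point("chatbot")
graph.add_edge("chatbot", END)

app = graph.compile()

# The input is a list of messages; the output is that list plus the AI reply.
print(app.invoke([HumanMessage(content="Hey, what's up?")]))
```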
And with that, we'll go ahead and take any questions that you guys have. Wiz, I want to invite you back up here, and I want to kick it off with a question that I've been thinking about, which is the old agent executor versus the new agent executor. The old one was still able to do agents, right? But it wasn't this agent-forward thing. Is it a case where it was kind of operating linearly and therefore was less compute efficient? What exactly is the difference between the old one and the new one, and how is it more performant in production environments? Sorry, I was muted. I think what I would say is that the idea of the old agent executor is great, right? But it's kind of inflexible, and we are stuck in that loop. Versus the LangGraph implementation, this new version, where we're able to let that reasoning body into this little box that we've created for it, and it can fully explore the paths within that box at its leisure. It's not just gonna say, hey, do this, and then if you're done, you're done, move on — or, if you're doing this, you have to keep doing it until I get a response that I'm satisfied with, or whatever. The idea is that by giving it a graph it can navigate through at its own whim — if we want to really anthropomorphize the LLM here — we're going to get more flexible, more awesome behaviors. And we're also going to build, to Wolfram's point, that playground where more emergent behaviors can evolve, because it's going to be able to navigate through paths that could change per iteration, or per call. The old-style agent executor couldn't necessarily not do that, but boy, it would be a bigger engineering effort to get it there, and it would always feel clunky, like we're trapped in the confines of that loop. Okay, yeah. So on each iteration we sort of have maximum flexibility to take the path of least resistance and optimal potential output — something like that. Okay, cool. So, a nice clarifying question here: can you just clarify whether OpenGPTs are using cycles for the chatbot and the RAG bot, or are they only using cycles for the agent part? Great question. So if we just have a chatbot, where all that happens is we provide a message and then the LLM provides a response, and so on and so forth, we could view that as a cycle, right? We have this idea that we add state, then the other actor adds state, then we add state, then the other actor adds state — we're kind of in this loop. Do we want to think of it as a cycle? It could potentially be useful. I think it's better, though, to abstract that into just turns in a conversation. But for any OpenGPT that hits any kind of tool or has any decision to make, we would definitely want to think of that as a potential cycle in our graph. Okay. Yeah, we've got a lot of questions coming in now. Unfortunately, we're gonna have to wrap up here soon. Another clarifying question — these are great questions from the audience. Can you talk about the difference between reasoning-action, the ReAct framework, and LangGraph? Is LangGraph also using this thought-action-observation pattern, or is it just capable of using it? So we are still relying on a kind of observation-reaction; the conditional edge kind of imparts that. When we have that conditional edge, we're going to think about what's happened and then make a determination on what the thought was. So if the LLM determines that we need to use the tool, we can consider that the thought, and then our action will be the tool. And that's why, when we create the nodes, we actually have our agent node and then our action node — we're trying to emulate this framework. Though it's a little bit more flexible and extendable, I think at the core we can think of that piece of the system as existing within the ReAct framework. Okay. Let's just rapid-fire these last questions here real quick. Jacob asks: with looping through the tools, is it possible to get an infinite loop situation if conditions are continuously not being met by the tools? Yeah, absolutely. Definitely, yes. All right. Good stuff, Jacob.
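One practical guardrail for that infinite-loop case, sketched under the assumption that app is a compiled LangGraph like the one above: compiled graphs honor the standard RunnableConfig recursion_limit (the default was 25 at the time of writing), so a run that keeps bouncing between agent and action errors out instead of spinning forever.

```python
from langchain_core.messages import HumanMessage

try:
    result = app.invoke(
        {"messages": [HumanMessage(content="...")]},  # the user's question
        config={"recursion_limit": 10},  # cap the agent <-> action round trips
    )
except Exception as err:  # LangGraph raises a recursion error once the cap is hit
    print(f"Agent gave up after too many tool loops: {err}")
```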
Forens asks: so is an agent the while loop of the LLM world? You end the loop on a given condition on the state? With the graph example, it's less cleanly that — it used to definitely be that, right? The old agent executor was basically a while loop: just keep looping until you either hit max iterations or you have the answer, you're satisfied, and you say, hey, stop looping, bro. I think we can more think of this as, like, a rat navigating some complex maze, as opposed to a loop. Anyway — rat in a maze, not a while loop. Okay, we'll pick that one up later. Next: OpenAI's systems don't make use of webhooks, which is super annoying. These OpenGPTs, do they make use of webhooks? I don't believe that they do by default, but they sure can be extended to if they don't already. I'd have to look into specifically how the LangSmith interaction works in order to answer that question super precisely. I don't think that, out of the box, they have webhooks, though. If I'm wrong, I'll definitely post in the comments. But the beauty is that we have the ability to extend the OpenGPTs framework because it's totally open source, right? It's not locked behind OpenAI's servers. So if you want to incorporate that, I mean, do it — submit a PR, I'm sure they'd be happy. Yeah, love it. Great way to close it up: go ahead and contribute if they don't have it. Maybe they should have it; maybe you should build it. So, awesome, Wiz — thanks for the demo, thanks for the Q&A. And it's about time to wrap up, everybody; we're one minute over. So thanks for joining us today. Next week, we're going to be doing direct preference optimization, and we've got a special event we're going to announce on Thursday. So definitely like and subscribe on YouTube if you haven't, so you can make sure you get those notifications. If you liked this session and you're not in our Discord community yet, we'd love to have you — check out the AIM community. Come build, ship, and share with us today. And if you want to just tinker around with other stuff we've done that's similar to this event, we're on YouTube live every week, and you can check out everything in our AIM index. Of course, if you want to actually take a full-fledged certification course with us, we do have our AI Engineering Bootcamp. The next cohort is going to launch April 2nd, and this is a seven-week endeavor — very, very fast-paced, but it's going to get you exactly where you need to be to build, ship, and share production LLM applications in 2024. Finally, I want to announce, and we'll throw the link in here in just a moment, that we open-sourced our LLM Ops Cohort 1 course materials. It's not our most recent cohort, but it's our first-ever cohort, and it's fully open-sourced on YouTube — available for everybody, with all GitHub repos and everything completely open. We'd love to get your feedback, so check that out. We'll throw the link in the description as well as in the comments if we don't get it into the live chat. So thanks for joining us today, everybody. Any feedback is always welcome — we love getting feedback from you, so fill out a feedback form if you're still around. And as always, until next time: keep building, shipping, and sharing, and we'll do the same. See you all soon.
LangGraph and OpenGPTs: Building Agent-Forward Applications with LangChain
3819
AI Makerspace
20240222
GPT-4 Summary: Dive into the groundbreaking world of LangChain v0.1, LangGraph, and OpenGPTs in an event that's essential viewing for anyone interested in the cutting-edge of large language models (LLMs). Discover how LangGraph introduces cycles into applications for enhanced agentic Reasoning-Action frameworks, facilitating a loop of reasoning and action critical for building sophisticated agents. Learn about the power of OpenGPTs, running on the innovative MessageGraph for streamlined chat completions, and explore the creation of Chatbots, RAGBots, and Assistants with increasing complexity. Whether you're a learner eager to craft your own OpenGPTs, an AI Engineer seeking an introduction to LangGraph, or a practitioner passionate about the open-source forefront of LLMs, this event promises to equip you with the knowledge to build, ship, and share advanced AI agents. Don't miss our live code demo for a hands-on look at assembling a complex agentic Assistant! Event page: https://lu.ma/langraphgpt Have a question for a speaker? Drop them here: https://app.sli.do/event/kf9oC7MgvBb8eUnXfWSJJm Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/AIEbootcamp How'd we do? Share your feedback and suggestions for future events. https://forms.gle/BHzb2tJL9ZLVP5Va6
2024-06-13T22:17:31.201324
https://www.youtube.com/live/OXruvSd-Tk8?si=sRh0PuOlfQ6VK6VF
Hey Chris, in our first head-to- head tool matchup, who wins? Laying chain or OpenAI assistance API? That's a tough one, Greg. That's a tough one. Okay. All right. All right. But does the API, the assistance API do what it says on the tin? Yeah, definitely. For sure. Okay. Okay. So you're saying we can actually build the same agent-like thing two different ways with two different tools that could be deployed to a production environment. And you're going to show us how? Thousand percent. Yes. Dope. Dope. I'm going to get everyone up to speed on agents, and then we'll get you back up here to rock out our first application. The LLM wizard and CTO at AI Makerspace, Chris Oleksiak, will be back to lead code demos real soon. Thanks for joining us on YouTube today. If this is your first time, welcome. Say hi to our community and share where you're tuning in from in the live chat. My name is Greg Locknane, and I'm the CEO and founder at AI Makerspace. Today, you'll learn about agents and the React framework from first principles. We'll cover the abstractions you need to build agentic retrieval systems in both Langchain and with OpenAI Assistance API. If you have questions now, feel free to drop them into the Slido link that's in the YouTube description box, and that we'll share with you in the chat now. All right, with that, let's go ahead and get right into it. Agents, that's what today is all about. We're gonna learn as we align our aims for the session about agents. What are they? How should we think about them at a high level? And how can we actually build them? What are the constructs that we need to build these things both in Langchain and using the OpenAI Assistance API, building OpenAI Assistance? Now, last week's YouTube Live, we said we were going to go a little bit deeper into the new stack and ops for AI. I've opted to not do that today so we can focus a little bit more on an in-depth discussion of agents so look for more llm ops after it's already in production content coming from us soon today we're going to really drive down into agents we're going to understand what they are how to think about them how to stack up our understanding of prompt engineering all the way up to agentic behavior in RAG systems. We're going to see how to do it in blockchain and open AI, and we're going to build agent-powered retrieval systems in both. Agents are pretty mysterious, and they're kind of actually mysterious, like in real life. Here's a quote from a book called Complexity. Agents exist in many fields. They exist in many different domains. And this is why this term was chosen for the way that we're building these things out when it comes to LLMs and AI applications. Agents might be molecules or neurons or species or consumers or even corporations. Molecules form cells, neurons form brains, species form ecosystems, consumers and corporations form economies. At each new level, we get these emergent structures and these emergent behaviors. Complexity comes from this emergence. comes from this emergence. This is why it's so important to think about agents as we set out to build more and more, ever more complex large language model applications. This is how we're going to build some really incredible things in the 21st century. And it's going to start from very basic principles with very simple rules. Stephen Wolfram's work comes to mind. And this is something that's been well studied for a long, long time. This is a book that's actually quite old and talking about many ideas that are much older. 
But let's zoom into today and zoom down to the level of the developer. Recall basic prompt engineering. But let's zoom into today and zoom down to the level of the developer. Recall basic prompt engineering. We're going to build this up step by step. Prompt engineering and the best practices of prompt engineering go something like this. You want to make sure you're always providing very clear, very specific instructions. You want to give as much context as you can. You want to provide the LLM with information about what you want it to act like, its role, its persona, how it's feeling today even. We want to use additional context retrieved in, for example, RAG systems to put into the context window. And as we give inputs, as we give examples, as we give one shot, two shot, few shot examples, we want to use those examples and even expand on them. Think through them. Use chain of thought reasoning to actually improve our outputs. And our outputs, outputs and our outputs in many cases need to be specified especially especially in agentic applications because when we're going and we're picking up different tools along the way we need the output of an llm to be the input of another tool this is where formats that are industry standard for hitting APIs like JSON come into play. Digging a little bit deeper into prompt engineering, recall zero-shot prompting is simply instructions only. This is just like talking to a human. This is saying, hey, I'm going to tell you what to do, young human. And I'm not going to tell you any examples. I'm just going to tell you exact instructions of what I'm hoping you'll do for me. And this works pretty well, even with very young children. This is eventually where we want our AIs to get. But for now, you know, it works a lot better when we're engaging with these tools. If we give one example or if we give it many examples, let's say between 10 and 100, we can get better and better and better output and better results. How we can take it even to the next level is we can say, hey, rather than just giving an instruction or providing a few examples, we can provide thought through examples. Instead of a standard prompt, we can provide a chain of thought prompt, or we are providing the reasoning steps that we took to get to the answer. For instance, this example here is the classic example from the paper on chain of thought reasoning. Roger has five tennis balls. He buys two more cans of tennis balls each can has three tennis balls how many tennis balls does Roger have well Roger started with five two cans of three each that's six five and six is eleven the answer is eleven this is the idea if we provide this thinking through this step-by-step thinking through, we can get better outcomes. And if we provide this step-by-step thinking through, we can get better outcomes in our agentic-like applications as well. Now, what if we're not sure how to think through it? Well, what do we do with humans? We say, well, Johnny, that's not the right answer, but can you think through what the right answer would be? We ask Johnny to refine his answer. This is the idea of self-refinement, and LLMs can do it too. We can ask a large language model, well, that's not quite right. Can you think through that step by step? Here's an example from the paper that introduced this idea of self-refine. The user says, I'm interested in playing table tennis. The AI responds, I'm sure it's a great way to socialize. Stay active. The LM is small talking about table tennis. 
That's not really what I'm looking for. I'm looking to be engaged and understood. I want the LM to sort of have some user empathy here. So if we sort of provide that feedback, we can get a refined response. Hey, that's great to hear. It's a sport and this is what you need to know about it. And this is how you might play it. If you're looking to get into it, this is probably important information for you. And in this way, we can get better and better outputs. We can think through, step by step, what's going on. This brings us almost to agents. If we think about agents in prompt engineering, it's a nice gateway into agents in langchain. So agents in prompt engineering we can think through as being sort of self refiners. We can ask them, think through the answer that you just gave me step by step. Are you sure? Is that the best you can do? An example is you might say, instruction, you are a Python programmer. Write some code to solve this particular programming problem. You might get an output and you might respond. Can you optimize this code to have a lower time complexity? You might get a response, might be lower. Then you might ask, is this the best you can do? And you might get an even better response. Programming examples also were shown in the self-refined paper. For example, this is a way you could generate a sum of one to n. But this code is slow because it's a brute force solution. We can tell the LLM that, and we can get a refined version. This brings us to the point where now we provide some examples, we think through them, we could even ask the LLM to think through for us in a self-refined manner, to reason through what to do, and then to actually do it. Combining reason with the generation is what we see here, but we can do this even more automatically and we can do it even more generally by combining reasoning with action. And this is the idea of the REACT framework. React framework. The React framework from the React paper, this is the example that they provide. So let's go through it. Initial question, initial prompt. Aside from the Apple remote, what other device can control the program Apple Remote was originally designed to interact with? So we're looking to control this program without the Apple Remote. Standard prompt response, iPod, wrong. Thought through, reasoned response. Let's think through this step by step. Yes, let's. Apple Remote was originally designed to interact with Apple TV. Apple TV can be controlled by iPhone, iPod, etc. Therefore, must be iPhone, iPod, etc. I like how you thought through that but still wrong, right? Now, what do we do as humans? We're often getting on Google, searching, researching over and over, right? We might search for the Apple remote. We might find some information. Then we might find that we need to find the front row software. We might search for front row software eventually getting there this is the green here but we might not get exactly where we're trying to go unless we dig pretty deep into the front row software manual but if we combine the reasoning and the action into one holistic process, you might have a thought after the initial question. Well, I got to search the Apple remote to find which program it was originally designed to interact with. So I'm going to find that Front Row Media Center. Great. 
Now that I've found Front Row row media center i need to search front row and find what other device can control it fantastic searching front row searching front row software and alas we find you can control this thing with the keyboard function keys on your apple device boom got it reasoning action for the win. So what we're seeing here is we're seeing while we can use standard prompting and we can move to chain of thought prompting, this reasoning step is very powerful as is the action step, the brute force search that we're all so comfortable with, Googling things, right? It's the meta skill. But if we combine reasoning and action, we can get better and better and better. This really truly is the art of googling as a human. And so agents are automating this process. That's what we're doing with this reasoning and this action. So let's get into how we actually do this in tools. how we actually do this in tools. Line chain is up first. Line chain agents, they'll tell you straight up. They use the LLM as a reasoning engine and that chooses which sequence of actions to take. Agents are always going to have access to tools. So the actions it can take is it can pick up different tools. For example, it might pick up a Google or a DuckDuckGo search. It might get access to a repository of research papers like Archive. There are lots of different tools you could imagine getting access to. Maybe it gets access to your database or the weather outside today. There's many APIs you could imagine connecting an LLM to. There are also different types of agents in Langchain. The easiest one, the most general purpose one, is the zero-shot React agent. I think it's enough said, right? Each step of that reasoning action, right? That was zero-shot, just using instruction only. And the OpenAI functions is a different type of agent as well. This will kind of come into play today. Now, in June, OpenAI released function calling. They said that GPT 3.5 Turbo and GPT-4 are even better at outputting inputs to other functions, other APIs. And of course, Dev Day, they released JSON mode, which is even better at doing this. JSON mode specifies the response as a JSON object, and it provides a more detailed instruction at the system level. So we can be more comfortable, more confident that we're going to always return a JSON object. Okay. How do we build a RAG system with agents? This is going to get kind of meta, so stick with us. You've probably seen this before if you've been watching our content. This is a basic retrieval augmentation system. The idea is you ask a question, provide a query. That query gets turned into a dense vector representation by being passed through an embedding model. That dense vector representation is fed into the database where we look for similar dense vector representations when we find similar things to our query we put them into the context of our prompt so we set up our prompt template we say use the provided context to answer the user's query. If you do not know the answer, cannot answer, please respond with I don't know. So we take all the relevant context we found that was similar to our query and we put it into the prompt. All of that gets shoved into the LLM and out pops our answer. Now, if we want to build an agent-like system, we might only be concerned with the input and output of this RAG system. So let's abstract now. Let's turn this into a single system component. This retrieval augmented question answering system, we're going to turn into a single Lego block. All right. 
When we have this as a single tool in our tool chain, we can build more complex things. This is where we can start to visualize what the React framework is actually going to look like in practice. Here's an example where we have two different retrieval augmented generation systems here and here. One is built with a Barbie reviews index. One is built with an Oppenheimer reviews index. In this case, if we want to ask, hey, do Barbie and Oppenheimer share similar themes? We might need information from both of these systems. But let's check out that we need a reasoning agent and an action agent, React, right? Have a conversation with a human answering the following questions as best you can. You have access to the following tools. Tool one, tool two. These tools. Begin. The query comes in. The agent scratchpad is used for reasoning, the action agent and the agent executor executes the actions that are reasoned, and it goes to the appropriate tool in the chain. That output is fed back into the reasoning engine, them and we may continue on with additional reasoning steps or we may output an answer all right so briefly let's review langchain some of you may not have seen this before in detail langchain is all about combiningLMs with other sources of knowledge and computation, other tools in the chain, let's say. Langchain, if you look at their docs, number one is all about connecting it to data, connecting these LLMs to data. And number two, check this out, being agentic, connecting to tools, connecting to the environment like an agent. So the core concepts we need for this, we're going to use our chat style model. We've seen this before. Particular importance for today's discussion, we're going gonna focus on the user message and the assistant message. The user message and the assistant message come into play when we use the OpenAI assistance API, little foreshadowing there. But this is necessary in Langchain as well. Our prompt template, we're gonna use that in a number of places, including in the actual reasoning engine, as we saw. So we've got some prompting to set up. Then we're going to develop our chains. built Langchain expression language rather than the previous way we have shown in events to build these out. L-C-E-L or the Langchain expression language is important for you to start learning because not only is it an elegant and nice way to compose chains, but it also is purpose-built to be able to integrate with LangServe and LangSmith, which we plan to cover in upcoming events. Finally, we need to connect this thing to a vector database, vector store, and index of our data. So in summary, we're going to connect models, prompts, chains, and an index. And we're going to wrap all of this in an agent. So the pattern here that we're looking for is thought. I don't know the answer to the user's query. The action is to figure out which tool to use. Well, ask the RAG, ask the question answering system. The action input is to execute that question. The observation is based on the output. Did I find the answer? Well, maybe I'm not able to locate the answer within the actual data and RAG system that I built. So if the question answering system is insufficient, maybe I'll go ahead and I'll leverage a backup Google search, or today we'll see we'll use DuckDuckGo. Executing that Google search maybe allows us to find the context we need to get the answer we want, finally, as an output. 
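The "single tool in our tool chain" idea boils down to wrapping the whole RAG pipeline behind one function the agent can choose to call. A minimal sketch, assuming a hypothetical rag_chain LCEL chain (retriever + prompt + LLM, built as the diagram describes) and a hypothetical tool name:

```python
from langchain_core.tools import tool


@tool
def movie_review_qa(question: str) -> str:
    """Answer questions using the indexed movie-review documents."""
    # rag_chain is the retrieval-augmented QA chain described above (embed the
    # query, retrieve similar chunks, stuff them into the prompt, call the LLM);
    # here it is treated as a single black box — the "Lego block".
    return rag_chain.invoke({"question": question})
```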
So we're gonna combine a couple things today. Number one is going to be a sort of meta RAG system — this is basically setting up RAG to learn about RAG. We're going to search the top arXiv papers for RAG, and we're going to set this up as one Lego block. You may have seen this before; it's something we've shown previously, but the details here are not the point of today. This is the point: we're going to set up a query that goes into a reasoning agent and provides the action agent with a decision to make — go to arXiv or go to DuckDuckGo, which one? And this is going to allow us to answer some pretty cool, pretty interesting questions using all the tools that we've talked about so far today. And with that, we're gonna see how we can chunk this thing out into specific bits of code with Chris, the legend himself, who we'd like to welcome back to the stage. Hello, I'm excited to walk through how we set up this flow in LangChain. So the first thing we need to do whenever we get into code is grab some dependencies, so we're going to do that here. You'll notice that we need the LangChain hub, we're going to use DuckDuckGo search, we're going to use arXiv, and then we have to get some basic dependencies that are part of OpenAI and LangChain. You will need an OpenAI API key for this, and we're using the GPT-4 Turbo model as an example, though you can substitute GPT-3.5 Turbo if you don't have access to GPT-4 yet. So the first thing we're going to do is set up our LLM. We're going to set it up as a ChatOpenAI model with the standard zero temperature — we don't need creativity here, we just need it to understand context and then create an answer based on that context — and we're going to use the GPT-4 Turbo preview model. You'll notice that we need to use our messages template. So we're going to test it out, make sure it's working, by sending the classic "hello, how are you?" and receiving the very robotic "I'm just a computer program, so I don't have feelings" message, which is great. So that's our LLM set up through LangChain — thanks, LangChain, makes it very straightforward. Then we're going to load the tool belt. You'll notice that we're using this load tools method from our agents, and we can just specify the tools that we need. So we're going to specify that we need an arXiv tool and a DuckDuckGo search tool, we're going to pass in our LLM as the LLM that powers each of these tools, and that's all we need to do to set up our tool belt in LangChain. Then we're going to create our agent using the LCEL format. The first thing we need to do, though, is grab a prompt. As we saw with Greg, the idea here is that we have a prompt that's going to let our LLM kind of simulate reasoning, then simulate tool selection, and then additional steps as well. We need some kind of prompt that lets the LLM do this, and so we're going to grab it from the hub. We're going to use one from Harrison himself, and this is the ReAct JSON prompt — the reason-action prompt. It uses this JSON format due to the fact that we're using the chat model. And you can see the prompt here, right? Answer the following questions as best you can. You have access to the following tools.
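Gathering those steps — and the chain assembly Chris describes next — into one condensed sketch. The notebook itself isn't shown here, so imports follow more recent LangChain package layouts, and the hub path is assumed to be the commonly used hwchase17/react-json prompt:

```python
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActJsonSingleInputOutputParser
from langchain.tools.render import render_text_description
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4-turbo-preview", temperature=0)

# The tool belt: arXiv + DuckDuckGo search.
tools = load_tools(["arxiv", "ddg-search"], llm=llm)

# The ReAct JSON prompt from the LangChain Hub, with tool info injected.
prompt = hub.pull("hwchase17/react-json").partial(
    tools=render_text_description(tools),
    tool_names=", ".join(t.name for t in tools),
)

# Stop generating once the model starts writing an observation for itself.
llm_with_stop = llm.bind(stop=["\nObservation"])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | llm_with_stop
    | ReActJsonSingleInputOutputParser()
)

agent_executor = AgentExecutor(
    agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
)
agent_executor.invoke({"input": "What is retrieval augmented generation?"})
```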
And then we're going to create a list of tools. The way you use the tools is by specifying a JSON blob, specifically this JSON, and it goes on and on. But the idea is this is how we let our LLM make a decision on which tool to use, and that's what gives it that agentic behavior. Now that we have our prompt, we need to inject the information about our tools into it so it actually knows how to use the tools. So we're going to do that by rendering a text description, and then we're going to add the tool names as a list, and that's it. So the idea is that we now have a prompt, and that prompt is going to contain our tools, what our tools can do, and so on and so forth. And we're good to go. Before we go to the next step, though, we want to add this stop behavior so that when we see that observation flag, we stop generating. We don't need to generate past that. The idea is that this is where we want to conclude our generation as we're done, you know, we're ready to move on to the next step. Now we're going to use the LCEL format to build this agent chain. You'll notice that we have a single object here, which is going to be a bunch of different things separated by the pipe. The idea about the pipe is that it's going to essentially act as an operator that moves things from one to the next to the next. The idea here is that we take our input and we pass it along into our prompt. We also have our agent scratchpad. And our agent scratchpad is going to be where our LLM can record its quote-unquote thoughts, right? And then we're going to populate our prompt with both of these objects, the input we get from us. And then the intermediate steps is generated through the LLM as it makes subsequent calls to itself. as it makes subsequent calls to itself. LLM with stop, this is just saying once we have our prompt, which is formatted with our user's input and the agent's thoughts or the agent's scratchpad, this prompt will be populated with that information. Then that will be piped into our LLM. Our LLM will produce a text response, and then that will be piped into our React JSON single input output parser. All this means is we're going to get a response in JSON, and we're going to convert it or parse it back to a format that we desire. But the basic idea here, right, is straightforward enough. We take our input, we format a prompt with it, we pass that format a prompt to our LLM, and then we parse the output. All we need to do now is wrap this in our agent executor. It's awesome that the LLM can make decisions about what tools to use, but we need to actually give it access to those tools or a way to call those functions. And that's what our agent executor is going to do. We're going to pass our agent brain, our tools. We're going to set verbose equal to true so we can see all the outputs. And then we're going to let it handle any parsing errors that might come up due to incomplete or malformed JSON. Once we've done that, we're good to start calling it. So you can see here we can pass it an input. That input is, what is retrieval augmented generation? You'll notice that we enter our agent executor chain. The agent has a thought, retrieval augmented generation is a concept of field natural language processing machine learning. It goes on and on. It decides on an action, and that action is as follows. It's going to query the archive tool with this particular query. It's going to get a bunch of responses from the archive tool. It's going to use those responses to synthesize another thought. 
The thought is going to be the articles found provide a good overview of, and it goes on and on. It says with this information, I can now provide a comprehensive explanation of retrieval augmented generation. And then it populates a final answer. So to go over those steps again, just to be clear about them. The agent initially receives the query. It makes a decision about which tool it should use or if it should use the tool at all. It decides that it needs to use the archive tool, gets a bunch of information from archive, figures out if that's enough information, figures out that that is enough information, and then populates the response. And that's the idea of this system. Now, the thing that makes it agentic or the thing that lets it be more powerful than just being a rag in a box is we can ask questions like, who is the current quarterback of the Denver Broncos? And you'll notice here that we enter the same chain, but the thought is that it should use a DuckDuckGoSearch. And so we wind up using a DuckDuckGoSearch with the query current QB of the Denver Broncos. We get the information we need. It says, I have the information we need. And then we get the final answer right here. So the idea here is that we don't have to tell it which tool to use. We just have to provide all the tools and then query it. And the LLM agent is going to choose the correct tool. And that's going to save us a lot of time. We don't have to hard code anything. And we can have our LLM have access to a broad suite of really powerful tools. So that is how we set this all up in Langchain. I'm going to send you back to Greg, and we're going to learn a little bit more about the OpenAI Assistance API before we see it in code. Yeah, awesome, Chris. So that's how you do it with the classic tool. How about with the new guy? You know, sort of traditional tool we just saw, right? That's how quickly these things are moving. Well, let's take a look at OpenAI's assistance API. What we're going to do here is we're going to build an agent-like application, and we're going to use tools. Now, you may have seen you can use Code Interpreter, you can use retrieval, you can use function calling. We're going to do a bit of a retrieval process here, but we're going to choose to leverage function calling for it. So this is going to be pretty cool. And Chris is going to walk us through the details. Sometimes it might make sense to go ahead and just actually use a function call to a repository like archive rather than uploading your own data. And so to build the same system in OpenAI assistance, we're going to need to know a couple of different constructs that add on to things we may have seen before. we may have seen before. First of all, the definition of assistant for OpenAI is a purpose-built AI that uses, of course, OpenAI models, can access files, and maintains persistent threads across the entire interaction. Of course, you can also call tools, as we've heard about many times so far today. A thread is simply a conversation. Everybody knows what a thread is, right? I mean, a thread is just opening up a convo. But here, a thread is between the assistant and a user, you and the AI, let's say. And storing that message history, that chat, that thread is the point of the assistant here. Because remember, when you hit the OpenAI API as a developer, you need to specify what you're putting in. You need to leverage the system level role to provide those instructions. And you need to leverage the user and assistant level roles. 
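Concretely, the kind of chat-format payload being described looks something like this — mirroring the classic OpenAI docs example, since the exact slide text isn't reproduced here:

```python
# A one-shot example encoded with explicit roles: the "assistant" turn is us
# showing the model how we want it to answer, not the model itself talking.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the World Series?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series."},
]
```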
Oftentimes we're leveraging it in a context like this, where we're providing the assistant-level and the user-level inputs. This is an example of how we might provide "who won the World Series?" — "the Los Angeles Dodgers won the World Series." We're providing a question and an answer, a one-shot example of the way we want to engage. But here, the assistant is not us acting like the AI — it's actually the AI engaging with us. It's our assistant. And so the Assistants flow here is: we create an assistant by defining the specific aspects of it, including which tools it needs to connect to; then we create a thread; we add messages to the thread; and as we run the assistant on the thread, we trigger responses. This is what calls the tools and allows us to do the same thing we saw how to accomplish in LangChain. So the flow is something like this: we set up the assistant, where we have our name, our instructions, our tools, and the model we're going to use; then we kick off our thread; and we then run the assistant on the thread. To see how this is done and actually implemented line by line at the syntax level, head back to Wiz to show us how to get it done. Back to you, Chris. Hey, thanks, Greg. I'm excited to walk through how we do this with OpenAI. The idea is pretty straightforward: basically we're doing exactly what we did in LangChain, with a few extra steps. First we need to set up some kind of system that lets us interact with our external search tools — in this case, DuckDuckGo search and arXiv search. So notice that we set up some functions. The functions take a string, and they return a string. You can see here, we can ask things like, who's the current captain of the Winnipeg Jets? And we get a number of responses about the fact that it's Adam Lowry. Then, same with arXiv search: we can send it a string, retrieval augmented generation, and we get a string back, which is just a list of summaries from the papers most relevant to that query. Okay, so that's awesome. Let's look at this DuckDuckGo search function a little bit more. It takes one parameter, query, which is a string, and it returns a string, and we can use it to search the internet. All right. In order to let OpenAI call these functions, we need to provide it the appropriate format with which to call them, because it doesn't have our Python functions loaded on its side. It has to send us an output that we can use reliably to call our functions — the function calling API, right? Now we're going to use this format to create these function objects. So we'll give our function object a name, which is going to be DuckDuckGo search; a description — answer non-technical questions, do not use this tool for questions about machine learning; and we're going to give it parameters. We know it takes one parameter, and that's called query. We're going to say the type is an object; the query is going to be a string; and here is an example of that string — the search query to use, with a small example. And we're going to say that this query is required: we only have one parameter and it's not optional. There you go. So this is just a way to describe the function — we want to explain to OpenAI how we need to call it, what we need to call it with, and what that should be.
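A sketch of that function object, in the shape the OpenAI function-calling API expects — the description strings are paraphrased from the walkthrough rather than copied from the notebook:

```python
duckduckgo_function = {
    "name": "duckduckgo_search",
    "description": (
        "Answer non-technical questions. "
        "Do not use this tool for questions about machine learning."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The search query to use, e.g. 'who is the current captain of the Winnipeg Jets?'",
            }
        },
        "required": ["query"],
    },
}
```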
Once we have this, the important thing we need to do is provide this description. Now that description is going to be what lets us choose when we call it. And when I say lets us choose, I mean, of course, OpenAI. So this natural language description is what will guide the LLM toward the correct function. So it's very important, right? This is like a prompt engineering opportunity. We'll do the same thing for archive, but we'll make it answer questions about machine learning. And we'll do the same thing as we did before, where we give it a description of the parameter, what it should be and how it should look. Now that we've done this, we can create an assistant. So we're going to first create our OpenAI client. This is going to use the OpenAI key we gave already. Then we're going to create an assistant. We're going to give it a name, which is Query Assistant. We're going to give it an instruction. The instruction is you are a personal assistant. Use the provided functions to answer questions. Then we're going to give it some tools. Those tools are going to be the functions we created. You'll notice, though, we're not passing the actual function. We're passing the object associated with the function. This is because OpenAI can't really call our function because it doesn't have it. So we're going to give it the archive function and the DuckDuckGo search function. We're once again going to use GPT-4 Turbo. Then we're going to grab our assistant ID. This is the ID of our assistant. Makes sense. Then we're going to create a thread. Now, Greg already explained to us what threads are. Basically conversations. Within this thread, we're going to create a message. And that message is going to be a user message with the query, what is the capital of Ottawa? You can see that we're going to, from the created thread, we're going to reference the created thread for our message. We're going to give it the role of user, and we're going to give it the content, which is our query. Now we have an assistant, and we have a thread. We're going to use our assistant with our thread, so we're going to run or create a run on our thread that leverages our assistant. So you can see we specify our thread ID that we created above that has this message, and then we're going to use the assistant that we created above through the assistant ID. Once that's done, we're going to need a helper function, which is taken from a helpful repo. Link is in the notebook. All this does is wait to retrieve the run until it is done. So it will just loop forever until it is either errored or completed. You can see here that when we wait for this run to complete, all we need to do is pass the run ID as well as the thread ID. Remember, we have this idea of a thread and we have this idea of a run. So we need to keep track of both of those as we move through our system. Then we have this submit tool outputs helper function. The basic idea here is that our response might sometimes be in the format of requires action. Requires action means the open AI has determined we need to call a function here, right? So it's going to say, it's going to deliver to us a blob we can use to call our function, but we need to get that response in order to continue. The idea of this tool is that we're going to get the function name and args from our response. 
We're going to call our function depending on which one we have, which one OpenAI has determined we need so either duck duck go search or archive query and then we're going to append these tool output array objects with the tool call id and the output from our function so all this is doing is taking that blob from OpenAI, calling our function on it, and then sending that output into this object. Once we have this object, we can use the threads.runs submitToolOutputs method, which is going to let us send a tool output array along with our thread ID and our run ID. And that's going to continue that conversation, but with the results of the function call. Once we have that, we can wait for the run to complete, at which point we'll receive our natural language response, which we print with this helper function. at which point we'll receive our natural language response, which we print with this helper function. So let's look at a function here that does all of that in one box, just to make it a little bit easier for us. We're going to first create a thread. Then we're going to create a message on that thread with our user's query. We're going to then create a run, and that run is going to be executed on that thread with this assistant. We're going to wait for it to complete. If we require action, we're going to submit the tool outputs back to our assistant, and then we're going to wait for that run to complete, and then we're going to pretty print everything at the end. So the idea, just like Langchain here, is that we have this decision-making process with this information retrieval step, right? So when we first execute, this is OpenAI, the LLM making a decision, should I use a tool or not? If it does use a tool, it sends back to us that we require action. We use the tool to collect the relevant information using the query that it generated for our specific tool. And then we use that to synthesize a natural language response to our original query. So it's the same system, but it's using OpenAI's tools instead of using that lang chain infrastructure. We can do things like ask, what is QLora? You can see that it's consulting archive. Once it's done, we get this QLora definition, which is a great explanation of QLora. We can ask, what is a meme? You can see in this case, it doesn't consult. Archive, it consults DuckDuckGo. And we get a response that is an explanation of a meme. And so that's how we build that same thing we had in Langchain with the OpenAI assistance API. And with that, we'll kick it back to Greg. API. And with that, we'll kick it back to Greg. Chris, that was awesome, man. And super cool to see how we can do retrieval through function calling. Sort of meta and pretty next level. Absolutely loving it. Today, in conclusion, we learned about agents. It's all about these simple rules access to these simple tools providing the complexity of future lm applications and that emergent behavior that we're inevitably going to see as we continue to build ship and share these more and more ever more complex ai systems agent like behavior is all about automated reasoning re and decision making or acting act the pattern here is thought action action input thought, action, action input, observation, and thought once more as we go through the reasoning action loop. It's easier than ever to build and deploy these things. We saw how to do it in Langchain with the React framework. We saw how to do it with OpenAI's assistance API. And that brings us to the Q&A portion of today's event. 
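Condensing that walkthrough into one sketch of the Assistants flow: create the assistant, start a thread, run it, and hand back tool outputs whenever the run requires action. The helper functions duckduckgo_search and arxiv_search, and the schemas duckduckgo_function and arxiv_function, are assumed to be defined as described above, and the polling loop here is simplified relative to the notebook's helper:

```python
import json
import time

from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Query Assistant",
    instructions="You are a personal assistant. Use the provided functions to answer questions.",
    tools=[
        {"type": "function", "function": duckduckgo_function},
        {"type": "function", "function": arxiv_function},
    ],
    model="gpt-4-turbo-preview",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is QLoRA?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, handing back tool outputs whenever it asks.
available_functions = {"duckduckgo_search": duckduckgo_search, "arxiv_search": arxiv_search}
while True:
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        tool_outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            fn = available_functions[call.function.name]
            args = json.loads(call.function.arguments)
            tool_outputs.append({"tool_call_id": call.id, "output": fn(**args)})
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
        )
    elif run.status in ("queued", "in_progress"):
        time.sleep(1)
    else:  # completed, failed, cancelled, or expired
        break

# The latest assistant message comes back first in the list.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```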
I'd like to welcome Chris back up on stage and I'd like to leave you all with this meme that we shared a while ago on our LinkedIn. This is a great way to think about it. Batman's got the tool belt. He uses his head to decide which tool to pick up. This is kind of what we're doing at the end of the day with agents. And finally, thanks for jumping in there, Adam. Now, there you have it. You saw it for yourselves. Who wins? What do you think? Adam thinks Langchain. Drop your thoughts in the live chat or in a YouTube comment. Langchain versus OpenAI assistance, our first ever tool versus many more to come. All right, let's go ahead and jump into the Q&A here, Chris. First question of the day, and go ahead and drop these in Slido if you have more questions. Can we build our own agent from scratch? Sure. I mean, it depends on how you mean that, but I mean, sure. Yes. You could build your own agent from scratch. If what you mean like is your own agent framework, that's kind of a research problem, so it's up to you and your team. But if what you mean is like any agent that does a specific set of tasks, then yeah, I mean, both of these approaches we saw today would be helpful to get started on your journey of creating an agent. Okay. Okay. Let's crank through some of these open source models. Cool too. Is this possible to do with like a Lama two 70 B and a react agent? I'm going to say that that is not a firm yes, but it's a pretty good yes, with some fine tuning and some careful consideration. You can definitely allow systems like Lama 2 and with some guardrails to ensure that tokens are being generated properly and that everything's in the right format. Yes, the brain piece is actually not the issue. It's just getting the specific syntax and formatting locked in. That can be a little bit more tricky with the open source models. But there are lots of methods that would allow you to do it. It might not be as out of the box, but if you're already using Lama two seven B, you're used to not straight out of the box experience. So you're already using Lama two, seven B you're used to a not straight out of the box experience. So. Okay. Okay. So what's really going on with the brain making decisions is there's actually a little bit of instruction, a little bit of prompt engineering behind the scenes that maybe we don't see when laying chain or open AI abstracts. Is that kind of what we're getting at here? Yeah. I would say like the thinking part is no problem. It's being able to ensure that it outputs its thoughts in a consistent and standardized format. That's a little bit less consistent on the open source models. But again, through some fine tuning or through your methods that guide token generation, you should be able to set it up so that it's much more reliable. And next question, I believe we did sort of cover this. Maybe you can click in on it a little further, Chris. Is an agent able to search through different RAG systems, like different vector DBs, different sets of data in reaction to a single query? Oh, yes. Hopefully we saw some examples of that. The idea is depending on your actual query, it's going to select the right RAG system. Now, you might ask a question that is relevant to both halves, right? And language chain will actually use both halves. Or say you have 30 different repositories of data that you're trying to synthesize a response from. It's going to intelligently, intelligently in quotes, select all of the correct systems. And that could be one or more than one. 
And it's going to be relevant to the query. Yeah. Okay, here's a good one. In the ReAct framework, where do the thoughts come from? Where do those actions come from? Where's it all coming from, right? I mean, it's the LLM. The LLM is going to generate a response, just like you see when you prompt ChatGPT. If I ask ChatGPT, hey, ChatGPT, I have a spoon, a fork, and some soup; what should I use to eat the soup? ChatGPT is going to respond with, you should probably use the spoon, my guy, right? So that's the thought. That's exactly the process of how it's generating the thought. And then the action is on our side. It says, hey, I think you should use the spoon. And then we say, call the spoon function then. And then we return that response to the LLM, which is going to carry on with more reasoning, and then we're going to be able to take discrete actions based on that reasoning. Eat the soup with the spoon now, right? Yeah, of course. Okay. So next up, from Lax, we have: which of these approaches is better in terms of time of execution, tokens used (and hence cost), and accuracy? I think he's referring to LangChain versus OpenAI Assistants. What a good question. It's difficult to answer succinctly or concretely. What I'll say is that the time to lift-off with LangChain is a lot less, right? For the OpenAI Assistants, there's a lot more engineering you have to do, and we didn't get to the point where it can re-query itself here; we would need additional engineering for that. So the time to lift-off is very low with LangChain. But the OpenAI Assistant has an advantage in that it's likely to use fewer tokens overall, so it might be less costly. There's nothing to say that has to be true, it's not a rule, but because we're using this functional API paradigm versus the reasoning-action paradigm, oftentimes the Assistants or OpenAI function-calling methods can be a little bit cheaper over the lifetime of your application. Accuracy: if the source model is the same, it all comes down to what you have written as the prompt; that's going to determine accuracy in both cases. Okay. But can I fine-tune? Next question. What do you think about this? Not with the OpenAI Assistants right now. I don't believe it's leveraging fine-tuning, though that feature is supposed to be coming soon, as I understood from Dev Day; I don't believe it's currently active. I could be wrong, so I'll double-check and post a comment if this is incorrect. But as of right now, I believe the fine-tuning option is not available through the OpenAI Assistants suite. If you use LangChain, you could use an open-source model, in which case fine-tuning is fully available. You can also use the OpenAI fine-tune endpoints if you've fine-tuned GPT-3.5 Turbo. So the answer is yes for LangChain, for sure. For OpenAI Assistants, it should be coming soon. Coming soon.
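To illustrate where those thoughts and actions live mechanically, here is a toy, framework-free sketch of a ReAct-style loop: the LLM produces the Thought/Action text, and our code parses it, runs the tool, and feeds the observation back. This is a simplified illustration of the pattern discussed above, not LangChain's actual implementation; the call_llm stub and tool names are assumptions for the example.

```python
import re

# Stub standing in for a real LLM call (e.g., an OpenAI chat completion).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model of choice")

TOOLS = {
    "duckduckgo_search": lambda q: f"(search results for {q})",
    "arxiv_query": lambda q: f"(arXiv results for {q})",
}

REACT_TEMPLATE = """Answer the question using the tools: {tool_names}.
Use this format:
Thought: reason about what to do next
Action: one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (repeat Thought/Action/Action Input/Observation as needed)
Final Answer: the answer to the question

Question: {question}
{scratchpad}"""

def react_agent(question: str, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        prompt = REACT_TEMPLATE.format(
            tool_names=", ".join(TOOLS), question=question, scratchpad=scratchpad
        )
        output = call_llm(prompt)  # the "thought" comes from the LLM itself
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        # Our code (not the LLM) parses the action and actually runs the tool.
        action = re.search(r"Action:\s*([\w_]+)", output)
        action_input = re.search(r"Action Input:\s*(.+)", output)
        if not action or not action_input:
            return output.strip()  # model didn't follow the format; bail out
        observation = TOOLS[action.group(1)](action_input.group(1).strip())
        scratchpad += f"{output}\nObservation: {observation}\n"
    return "No final answer within the step limit."
```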
So this is an interesting question: what are the best small LLMs to perform simple reasoning, like deciding which tool to use? Is there one? I'm going to hot take here for a second. If it's a really easy binary decision, you could probably just use a classifier model, a simple classifier. You don't even have to get as big as your BERTs, right? So something like DistilBERT, or even smaller classifiers. In the case of more complex behaviors, something like Zephyr, or models in that 7B, instruction-tuned range, are going to be the best models to use. Going smaller than that is definitely possible, but again, we run into this issue of getting consistent outputs, and the actual thinking or reasoning skills of the larger models are also better. So as a generic should-work-most-of-the-time response, I'm going to say the Zephyr suite of models is where you should be looking right now. Hot take: DistilRoBERTa for your reasoning engine. Yeah, love that. Oh, binary only. Binary. Okay. LLM as a judge: which is the better choice, LangChain evaluators or RAG assessment (RAGAS)? A little bit off topic and off the trail here, but we're going to go. I'm going to hot take here again: they're the same thing. It's like two different names for the same bus; it's just a different word for the same thing. The LLM-as-a-judge features largely come down to the prompt and the prompt style, and both RAGAS and LangChain evaluators can be modified and customized. So at the end of the day, each is going to make an API call with a certain set of information to the LLM, with a prompt that you provide. So they're similar. If you're looking for the best out-of-the-box solution: if you are in the LangChain ecosystem, obviously staying in it is going to be easier for you; if you are not in the LangChain ecosystem, just go with RAGAS. It's very good at what it does. Okay. One more that's a little bit outside the box here, but let's go ahead and hit it anyway because it's the last one on the Slido: how do you handle latency for text-based models, for multimodal? Async is your friend. A lot of it comes down to "async is your friend." Unfortunately, right now, latency is huge on everything all the time. The Assistants API does help with that through this idea of threads: we have many asynchronous threads running, and we can pile up more and more threads. For multimodal, though, you just have to wait. At some point, you have to wait for a very large piece of compute power to do a lot of computing, and so latency is always going to be a problem. But I would say whatever you can parallelize, you should, and whatever you can do asynchronously, you should. Wiz, all right, man. Close it out. Thank you so much, dude. If this resonated with you today, thanks for coming. We hope to see you again next week at our event on Transformers; then we're going to be doing a super deep dive for like three hours, soup to nuts, Transformer from scratch. We thank you for joining us today, for sure. And if you want to go deeper on these topics, the other thing you might think about is that next week we're launching cohort 3 of our LLM Ops: LLMs in Production course, where we're going to dive into all of the AI engineering best practices. You're going to be able to get in there and build your own agentic RAG application, and you're going to basically be a pro at the systems we covered today by the end of it, by the time we get to demo day. If you have questions, feel free to throw them in the chat, throw them in YouTube, or reach out to us in response to any direct mail, and I'll get right back to you myself.
Other than that, please share any feedback you have on today's event. We're going to drop a feedback survey in the chat. We'd love to hear from you. How can we improve? What would you like to see next? We're going to stay on this Wednesday vibe, and we're going to start to be a little bit more consistent with the 1 p.m. Eastern, 10 a.m. Pacific each Wednesday. We wish you all a really happy Thanksgiving tomorrow, and we really look forward to seeing what you build, ship, and share in the AI Makerspace community and beyond on Discord, on LinkedIn, in YouTube in the future. Thanks so much, everybody. We'll see you soon.
Agents: LangChain ReAct vs. Open AI Assistants API
3,651
AI Makerspace
20231123
GPT-4 Summary: "Discover the Future of AI: LangChain & OpenAI's Latest Tools for Building Complex Applications" - Join our must-watch event for AI enthusiasts and professionals! We'll dive into the innovative world of Agentic apps, exploring LangChain's breakthroughs in data-aware applications and Chain-of-Thought reasoning. Get an exclusive look at OpenAI's Assistants API, revolutionizing the development of agent-like applications. This event is a game-changer for LLM Operations practitioners, aspiring AI engineers, and builders eager to harness the power of LangChain and OpenAI's Assistants API. Don't miss out on learning how to create cutting-edge AI systems, with all demo code provided for you to build and share. Click now to unlock the secrets of advanced AI applications! Event page: https://lu.ma/Langchainvrsopenai Have a question for a speaker? Drop them here: https://app.sli.do/event/np3mh88VswLrRYsMpifeon Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Apply for the LLM Ops Cohort on Maven today! https://maven.com/aimakerspace/llmops Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA How'd we do? Share your feedback and suggestions for future events. https://forms.gle/az6cQrPNoc7f7cgy6
2024-06-13T22:24:30.958353
https://youtu.be/fIDxnTe4mBA?si=0KNX4QZ_2kXjfAxQ
Welcome, everybody, to part two of the Chatterversary Hackathon, and this is going to be the deep dive overview of RAG. So let's jump right into it. We're going to go ahead and align our aim here. We want to make sure that we are understanding each and every one of the core components of RAG. We like to break down RAG into two primary pieces, dense vector retrieval and in-context learning. We're going to think of RAG throughout the day as question answering over documents, and we will build a RAG system in the next portion of today's event. Now, it's important to think about why RAG. This is from Deep Learning AI's The Batch this week. And it's amazing that just a year after ChatGPT, the number of closed and open source models that we have to learn from, to leverage, to pick up and use in our work. From OpenAI to Microsoft to Google to Anthropic inflection perplexity there's all sorts of models out there today and the apis that we can use to pick up models off the shelf AWS all the big guys in the cloud hugging face of course absolutely legendary and you know there's just so much proliferation of these skills and tool sets that it's important to be able to sort of start building. And if you're going to build anything today, rag is the thing. You know, rag, this was put out by Langchain the other day in a deconstructing rag blog. Highly recommend this as a read to everybody out there. But the interesting thing too is that if you watched Andrej Karpathy's recent talk about the LLM operating system, RAG is sort of this piece. And I really love this sort of connection. So at a high level, if you're okay with prompt engineering, it's time to start getting okay with RAG. And that's what we're really focused on building out today. So what is RAG anyways? Let's talk about it. Let's think about these two core components it breaks down into. And let's think about how we can build RAG systems. Now, generally, you want to think of building LLM applications in three steps, and this is how you want to build today. You want to prompt engineer as far as you can. Then, if that's not working well enough, you want to introduce some additional context through your documents. and RAG systems as question answering systems in their most basic form. And these are going to be really helpful to getting our system set up to do the things we want it to do. If you're going to be a coach, if you're going to be a tutor, if you're going to be an assistant, you need to be able to answer questions very well, right? And so this is the big idea with RAG. When we talk about fine-tuning, there's a lot that goes into different parts of fine-tuning. That's a little bit beyond the scope of today is to cover all of fine-tuning. But it's important to understand we will cover fine-tuning embedding models. And it's sort of this new emerging paradigm where we're saying, okay, well, actually, it's not a linear thing anymore. It's not prompt engineering, then RA well, actually, it's not a linear thing anymore. It's not prompt engineering, then rag, then fine tuning. But rather, these are two different dimensions of optimization. One is optimizing the context, what the model needs to know. One is optimizing the actual language model, how the model needs to act. Now, you can say this is more of a two-dimensional thing. I think this is a useful mental model, but oftentimes you do end up still going, prompting, adding some examples, adding some RAG, then doing some fine-tuning. 
It does look linear in many cases, and it does get iterative between RAG systems and fine-tuning in the limit of trying to achieve human performance, that's for sure. Okay, so what is RAG anyway? Well, let's consider that RAG is question answering over documents, and let's look at what a final demo might look like. This is the simplest RAG system you might build; it's called chat-with-your-PDF. That file is not where it's supposed to be; that is Isaac Asimov's I, Robot. And if we take this PDF, we can go ahead and drop it in; it's going to process it and set things up for us to be able to ask questions. "In one- or two-word summaries, what are the three laws of robotics?" It's going to use a conversational retrieval chain, and by the way, the live demo gods are already getting their fury out on me, so hopefully they're kind to you by the end of the day today. And what we'll see is we do have the three laws of robotics: no robot may harm a human being; a robot must obey orders. "In one word each, what are the three laws?" We can engage with this thing and, based on our input and the retrieved documents and information, we can get some really interesting outputs. First law: protect. Second law: obey. Third law: preserve. Let's look at the source material here. Oh, okay, so look at this: self-preservation is the third law. Preserve, all right? Very fact-checkable outputs here. This is the big idea with RAG: fact-checkable outputs. All right, so this is as simple as we can get, and if we were demoing this, we would probably add some additional next-level pieces to it. What kind of next-level pieces might we add, Wiz? Well, Greg, basically, we're missing quite a few things here. Number one would be better sourcing, so having the sources perhaps attached to the actual PDF. Number two, we're doing very naive retrieval here; we're just kind of chunking and then retrieving, and there are a lot of different retrieval methods we could explore to improve the application. We're not doing any real monitoring or evaluation here; we're just kind of plugging it in and away we go. We're using all closed-source models, so we could extend this to an open-source model; we could even use open-source embeddings. We could then fine-tune these pieces to give us better generation and retrieval. Really, the sky's the limit in terms of what we could do with this base application, but those are just a few examples, and those are the kinds of things we're going to be talking about for the rest of the day today. So this is, at a high level, the simplest possible RAG system. And what's the big idea here? Well, the big idea is that if you just ask LLMs questions, they're not necessarily fact-checkable, and they will sometimes give you confident false responses. So fact-checking is very important. It's very important in life, right? Nobody likes fake news, and it's very important as we build AI applications, perhaps more important. Retrieval augmented generation is simply talking about going and finding references in your documents, adding those references to your prompt (augmenting the prompt, so to speak), and then improving your generations. Generative AI, right? That's it. Find them, add them, improve: RAG. That's all there is to it.
And when we talk about this being useful, one of the things it's very useful for is when you have very specialized domain language. We had a few legal applications people were thinking of building; this is true if you have legal jargon, if you're talking to real doctors, if you're working in the financial industry, government acronym soup, researchers with their own special language in each building at the university, and so on. Also, put a pin in this: this is the place where you might want to consider fine-tuning your embedding model as well. So what is RAG? Well, RAG, as we like to break it down, is dense vector retrieval and in-context learning. The R in RAG comes first, so let's go part one on the R in RAG. RAG is, first off, dense vector retrieval. One of the most commonly misunderstood things is a classic NLP idea: turning words, text, into numbers, embeddings. This is going to allow us to view words in an embedding space relative to one another. Here's an example we like to show from Cohere's blog: where would you put the word "apple"? Can I get some answers in the chat? Where does apple go? C. Crushing it, Victor, he's on it: right by the bananas and the strawberries and the grapes. Cherries? Cherries. All right. So what about where the cow goes on this one? A; we got A from Manny. Anybody else? C from Taz. C from Victor. The answer is actually C here, because puppy is to dog as calf is to cow. So you might say this is a dimension of age; maybe that one is a dimension of size or height or weight. And the interesting question, though, is: where does the legal RAG system fit in on this chart? Where does the LLM wizard fit in on this chart? Where does the Millennium Falcon that Sarah is sitting in right now fit in on this chart? It's not clear. We need more dimensions of embedding space. This is the idea of many, many dimensions of embeddings, let's say over 4,000 in some cases (again from Cohere here), just to represent a single word relative to lots of other single words. Of course, you can do embeddings at the token or subword level, and you can also do embeddings at the sentence level, as we will show later today. But the most classic example is from ten years ago now: 2013, Word2Vec comes out. Word2Vec, you get it, right? And we can do classic things like king minus man plus woman is about queen. That's when we know our vectors are kind of working. And even back then, they called out that this is really useful for many downstream tasks. We obsess over this language today with LLMs; it's very important to focus on downstream tasks. And Word2Vec is one of the most iconic and earliest examples of dense vector representations. Dense vector. Today, we have much better embedding models, like OpenAI's Ada, which we'll be getting started with today. You can also find the latest and greatest, that new hotness, on the Massive Text Embedding Benchmark (MTEB) leaderboard from Hugging Face.
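To make the embedding idea concrete, here is a small sketch that embeds a few words with OpenAI's ada embedding model (mentioned above) and compares them with cosine similarity. The model name and the helper functions are assumptions for illustration; any model from the MTEB leaderboard would work the same way.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(texts):
    # Each input string becomes one dense vector (~1,536 dimensions for ada-002).
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [np.array(d.embedding) for d in resp.data]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

apple, banana, tractor = embed(["apple", "banana", "tractor"])
print("apple vs banana :", cosine(apple, banana))   # expected to be higher
print("apple vs tractor:", cosine(apple, tractor))  # expected to be lower
```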
So when we think about now putting the R in RAG, part two, what we're talking about is retrieval. We've got the dense vector idea; now let's talk about retrieval. Here, we're putting a query into an embedding model, which is going to convert that query into a vector. And then we're going to go to a vector database and look for stuff like it. So the big idea here is that there are three easy pieces to retrieval: you ask a question, you search the database for stuff that's similar, and then you return the stuff that's similar. And when we talk about creating that database, or creating that index (shout out LlamaIndex), we're often talking about, as we get started, only one type of index we need to care about. It's called a vector store, or a vector database, or a vector store index, or just an index; there are lots of ways to talk about it. We take the docs, we chunk them up, we push them through an embedding model, and then we create our vector store. Why is that important? Because when we go to actually ask a question, we push it through our embedding model, we get a vector, and we look, generally through cosine similarity, to find nearest-neighbor similar things within the database, within the vector store. So why is it dense vector retrieval? Because we ask a question, it's a vector; we search the vector DB for stuff that's cosine-similar and get it back; then we return the stuff we found in natural language, right? So we've got our query, we go find similar stuff, and we set up our prompt template. That's what we're going to talk about next. It might say something like: use the provided context to answer the user's query. You may not answer the user's query unless there is specific context in the following text. If you do not know the answer or cannot answer, please respond with "I don't know." Remember, it was directly in the prompt playground that we were able to see this exact prompt that we used in the Chainlit application. So then we go ahead and take our similar context from the vector database and we shove it right into the prompt: reference one, reference two, reference three. We saw this before. Finally, all that stuff is in the prompt, we shove it into the LLM, and we get our answer. Bada-bing, bada-boom. So this prompt template piece contains this idea of in-context learning. Where does in-context learning come from? It comes from the classic paper "Language Models Are Few-Shot Learners." This is GPT-3; during the release they introduced key concepts like one-shot learning, few-shot learning, in-context learning (here we are), and meta-learning, which is pretty meta. We're not going to get into that right now, but definitely check it out; it's worth understanding if you're into prompt tuning and other things that are sort of half out, half in the model. So: scaling up models greatly improves task-agnostic few-shot performance. What are we talking about? Well, this is an image from the paper, and it shows that with more examples in context, we get better accuracy. This is representative of many types of tasks, and the big idea is that larger models make increasingly efficient use of in-context information. We're simply talking about prompt engineering here. With zero-shot, that means instruction only ("translate the following word"), we're not doing so hot. But with just one example, on a large enough model, we get massive improvement. Between, let's say, 10 and 100 examples, or 100 and 1,000 examples, we're getting much, much better performance.
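Here is a compact sketch of that whole naive retrieval flow: chunk, embed, cosine-similarity search, then stuff the top results into the prompt template quoted above. It assumes the same ada embedding model and an in-memory list instead of a real vector database, purely for illustration.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

SYSTEM_TEMPLATE = (
    "Use the provided context to answer the user's query. "
    "You may not answer the user's query unless there is specific context in the following text. "
    "If you do not know the answer, or cannot answer, please respond with \"I don't know\"."
)

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def build_vector_store(chunks):
    # The "index" is just the chunks plus their embedding vectors, kept in memory.
    return {"chunks": chunks, "vectors": embed(chunks)}

def retrieve(store, query, k=3):
    q = embed([query])[0]
    sims = store["vectors"] @ q / (
        np.linalg.norm(store["vectors"], axis=1) * np.linalg.norm(q)
    )
    top = np.argsort(sims)[::-1][:k]  # indices of the k most cosine-similar chunks
    return [store["chunks"][i] for i in top]

def rag_answer(store, query):
    context = "\n\n".join(
        f"Reference {i + 1}: {c}" for i, c in enumerate(retrieve(store, query))
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE},
            {"role": "user", "content": f"Context:\n{context}\n\nQuery: {query}"},
        ],
    )
    return response.choices[0].message.content
```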
So the idea is: give examples in your prompt, give a few demos. Use the input to the LLM to tell it about the kind of task you want it to do. You should always start here, and you should expect it to complete further instances. You can make up words like "whatpu" and "farduddle" here, and it's very fun to do this sometimes; you might consider doing something like this in your demo today. What we're talking about here is a spectrum, though: how much task-specific data do we need? I was talking to an enterprise yesterday, and he said, well, I was shoving as many examples as I could into the prompt, and then they finally increased the context window size and I could shove more in, but I still can't shove them all in. It's like: you're on the path, guy, you're on the path. Because this is the path, zero-shot to one-shot to few-shot. Where does it lead? Well, inevitably, once you get to thousands of examples, it leads to traditional fine-tuning. This is a spectrum that ends, in some measure, kind of where we began the day today, right? And so what we're talking about when we talk about in-context learning is, of course, just prompt engineering. When we talk about prompt engineering, the most important thing is to be clear and specific in your instructions. Give it a specific context, a place from which to stand, and very clear rules that you want it to follow: if you don't know the answer, say "I don't know." And in RAG systems, context also includes things like the reference material that we found. So as we give input, we want to provide instruction (zero-shot), an example (one-shot), maybe 10 or 100 examples (few-shot), and we want to give really well-thought-through examples, too. If we want to build agentic systems that are much more interesting and connected to many other things, it's often very important to specify the output. Shout out to OpenAI's recently released JSON mode; this is something that's been up and coming in the space for quite some time. So when we talk about taking this to the next level, you want to make sure you give really good examples that have that chain of reasoning within them. This is the idea of chain-of-thought prompting. We're not going to dig too deep on this, nor on the idea of moving from standard to chain-of-thought prompting through self-refinement, but I would encourage everybody to check out these papers. This is the idea of: yo, model, can you do this? It's like, yeah, sure, Greg, here it is. And then: I don't know, can you tell me exactly how you got that answer? This is the simple way we can interact to improve our ability to prompt in the future. So what have we got? RAG is dense vector retrieval plus in-context learning. It looks something like this. And in conclusion, we can use RAG to find references, stuff them into the prompt, augment the prompt, and generate better outputs. Components include a vector store (aka an index, aka a vector database), embedding models, LLMs, and some prompt templating, aka prompt engineering. And with that, we're ready to move on to the next piece of our Chativersary content: how to build with LangChain.
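Since JSON mode came up as a way to specify the output, here is a minimal sketch of what that looks like with the openai Python SDK. The model name is an assumption (JSON mode shipped with the 1106-era models), and note that the API expects the word "JSON" to appear somewhere in your messages.

```python
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # assumption: a model version that supports JSON mode
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You extract fields and respond only in JSON."},
        {"role": "user", "content": "List the three laws of robotics as a JSON array under the key 'laws'."},
    ],
)

# The content is guaranteed to parse as JSON when response_format is json_object.
data = json.loads(response.choices[0].message.content)
print(data["laws"])
```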
Session 2: Retrieval Augmented Generation (RAG), An Overview
1,278
AI Makerspace
20231204
What you'll learn in this session: - What is RAG, exactly, and why do we need it? - RAG = Question Answering over Documents - Retrieval Augmented Generation = Dense Vector Retrieval + In-Context Learning Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Apply for one of our AI Engineering Courses today! https://www.aimakerspace.io/cohorts
2024-06-13T22:26:43.164746
https://www.youtube.com/watch?v=vP2JSAZLnRk
to this is going to be actually building the chat application. So this is where we put our front end on; this is where we deal with the web framework; and this is where Chainlit comes in. We like Chainlit, as we've talked about. It's got a really nice interface. It allows us to ask questions, to look at chains, and, when we start adding RAG to it, to look at specifically where we got reference materials. We can change prompts, look at prompts, play around with them; this is that prompt playground. We can look through the thinking, the chain-of-thought reasoning, and it integrates very well with any sort of infrastructure tool stack that we might pick up. So Chainlit's great, and it's got a very simple Python syntax. It's really straightforward for people who are familiar with web frameworks already, and it's pretty easy to pick up for folks who aren't; the only way out is through with web frameworks, really. So we're going to have a couple of additional things to do here, like starting the chat, and then: when we get a message, what do we do? You'll notice some of these aspects within the code, and you'll need to connect some pieces here. You guys will need to set up your OpenAI API key. Just like you had to sign up for ChatGPT Plus, you'll need an OpenAI API key to be able to build applications with OpenAI's API. And this sort of making sure you have access to things is going to be a key component of developing any application moving forward in your career. So when it comes to building, that's putting the application front end over top of the OpenAI API to allow us to interact with it the same way as ChatGPT. When we talk about sharing this bad boy, we want a public URL that we can share with our friends. We can text it to our mom, our sister, our son; we can text it to anybody we want and say, check out this sweet AI app that I built. This is going to be that big wow moment for you guys. This is where Hugging Face Spaces comes in, and creating your own space is very easy, as Chris will show. We're going to use the approach of the Docker container, and you'll see it actually builds the Docker container for us. It's freaking awesome. So what we're going to do is pass it right over to Wiz. We want to maximize the time you guys have to work together today, so he'll show you the step-by-step walkthrough of how to put Chainlit onto the OpenAI API, wrap that bad boy up in a container, and then ship it and share it with Hugging Face. Wiz, show them a little bit about how to do all of this, and let's go ahead and focus a little bit on the Chainlit piece that we know people are going to keep struggling with. The last thing I'll say is that you guys will have to do this same thing over and over and over again in this course. The code behind the scenes isn't the code anyone cares about; it's the code people interact with, it's the app people interact with, and this is what gets you there. This is going to be a game changer for you if you learn it in, out, up, and down. Wiz, over to you, man. Hey, thanks, Greg. Okay, so let's go ahead and jump into VS Code, because, well, that's where the code lives. Gotta go where the code lives. So here we are in VS Code. You'll notice that we have our AI Engineering repository on the left-hand side here.
And we can go to day two in week one, and we can see that there's a bunch of different files. There are only a few files we truly care about here. The first is app.py, which is where the Chainlit application lives. We have a chainlit.md, which is like a README, but for Chainlit. And we have, of course, our Dockerfile, which is how we create our container, and the container is basically the best thing. And then the last piece is just kind of standard repo stuff. So let's look at app.py and talk about what's happening in Chainlit here. Number one, we've got a pile of imports; classic, the top of every Python application is just a massive set of imports. Our load_dotenv is useful if we're going to be running this file locally and we've populated our .env. You'll notice that this is just a .env.sample, so you'd have to copy this file, paste it, and rename it to just .env, and then we can put in our OpenAI key. That's going to load it, that's going to be great, but for now we're just not going to worry about that so much. The next step is that we have our system template and user template. These are basically just pieces of strings that we can format later on. It looks like we got a question from Rajesh that I'll try to answer quickly as we go through: what does Chainlit give you compared to, let's say, just doing it completely manually, talking to the OpenAI API? We'll discuss that as we go through, so I'll keep that question in mind. So we have our user template. The user template is exactly what we had before in our Colab notebook, and all it's doing is adding the chain-of-thought step, to be honest with you. The idea is that we have some user input and then we tack on that chain of thought so that we get a well-thought-out answer. And then we get to our first Chainlit. Now, if you're not familiar with Python decorators, this is going to be a crash course that we're going to go through very, very quickly; please come to office hours if you have more questions. The idea is that a decorator is like a function that takes another function, and it's going to do stuff, and it's going to use that other function in some way. This whole async def start_chat is a function that's going to be wrapped by all of this. Now, the way that Chainlit works is that there are a number of different steps Chainlit goes through every time we use this application, and at different points or different steps along the way, we can inject our own information. So we can say this on_chat_start decorator is going to add this extra information when the chat is starting up, and it's going to be able to leverage that to do some things that we desire. But the idea here is that all this decorator is doing is basically saying, hey, whatever function follows this, I'm going to use that for my behavior. It's a little bit cleaner to see with on_message.
Basically, this is going to say: on message, whenever we get a message, we're going to do all the stuff that's in this function. And that's it; that's all we do. The basics of this are that we're going to get a message, and that's going to be a message (I know, the terminology is all the same word over and over again), but the idea is that the message is a Chainlit Message, which is a particular object. All we're going to do is stuff, and then we're going to return something, and the thing that we return is eventually going to be what the response is. That's it. Now, the stuff we do is a lot, but it's not too crazy; in fact, it's exactly what we saw before in the notebook. We have a client, and we use that client to interact with OpenAI. Now we're using this asynchronous client instead of just the client. If you have questions about what async means, please come to office hours; I'm not going to spend time talking about it now, but it's a way that we can make sure we keep things responsive and we're not waiting around for things to finish before we start doing other work. And then this is just going to print it to our terminal so we can see it. Totally unnecessary step, but it's good to look at. And then inside, again, this is going to happen every time the user sends us a message. Every time they send us a message, all this stuff is going to happen in the background. So we open up our client, and then we're going to create a prompt object. All the prompt object is doing is letting Chainlit see more about this prompt so it can populate more information in its front end for us. That's it; we're just packaging this in a certain way so that Chainlit knows how to properly display it and knows how to let us know what's happening. Then all we're going to do is build an empty message that we will eventually send back to the front end; this is what the user is going to see. And then we just straight up call the OpenAI async endpoint. Basically, all this Python code is saying is: hey, for our input, send that input to OpenAI in the way it expects, which is a list of objects that each have a role and some content. We're going to send all of those, and we're going to expect a streamed response. You might notice that whenever you're talking to, say, ChatGPT, it gives you the response word by word, token by token. All this streaming thing is doing is making sure we get a bunch of tokens in a row, as opposed to needing to wait for it to generate every token and then send the full block of text. Because you can imagine, if you're using ChatGPT and it responds with a very long message, if you had to just sit there waiting for all that text to be generated before it showed up, it would not feel nearly as good as it does when it starts showing you the tokens right away. Oftentimes they're even generating faster than you can read. So it's all good; that's all this code does. So basically all it's saying is, every time we get a message (that's all this on_message does), every time we get a message, we're going to get our OpenAI client.
We're going to send our users input to the open AI client, and we're going to get a stream of tokens as a response. And every time we get a token from the OpenAI endpoint, we're going to send it to the user so they can see it in that chainlit UI. That's all that's happening, right? Like, it is a lot, but that's all that it's doing. We get the message from the user. We send that message along with some formatting that it's doing we get we get the message from the user we send that message along with some formatting that we're doing to create those those roles and to add that chain of thought prompt right and then we get a stream of tokens in response and every time we get a token from open ai we just we just send that right back to the user so they can get those tokens as well that's it these decorators if you can think of it as like interrupting the normal flow right if we didn't have these it would just do its own thing but instead we're going to tell them hey this is uh you know instead of doing what you would normally do do what i want you to do right which is in this case do this accept the user's prompt generate tokens send to user uh streaming tokens as they're generated. No, very, very specifically, the tool, the streaming is as the tokens are generated. We don't have to wait for every token to be generated, which makes it feel very snappy. So that's that. And that's great. And it's cool. And we love it. And it works locally, right? And you can wrap it up in this docker file the docker file i'm not going to go too deep into right now uh we can talk about this a lot more throughout our time together but all this is doing in essence is it saying hey you know uh we want a specific place or a specific environment to run our application. And we need a blueprint, right? We need instructions on how to build that environment. And you can think of that Docker file as a blueprint for your container, right? We're going to say, hey, start with the Python 3.9 foundation. And then we're going to add some user group stuff on top of that because Hugging Face requires it. And once you've done that, you're going to go ahead and if you could just take the requirements.txt I've given you and put it into this folder inside the container. And then inside the container, if you could go ahead and run this command, which is going to install all those requirements, right? Then copy the rest of the stuff I gave to you, and then go ahead and you're going to run this command inside that container. This is all it's doing, right? It's a blueprint. And that blueprint is in a standard format, so that when we send it to a container or we send it to a machine in AWS that has Docker or GCP that has, wherever has Docker, it knows how to read this blueprint, and it's going to be able to create that container for us. That's all the Docker file is doing. It's just a glorified blueprint. It's, it might seem confusing when you first start using it, but as you get more reps, you're going to, you're going to be able to really, you know, get a firm grasp of like, it's just a, just a list of instructions, just like any other kind of algorithm. Just a list you follow from top to bottom that tells whatever's building that container how to do so. So that's Chainlit in code. Looks dope. Now we want to be able to push this thing to the hub, right? So in the instructions, you're going to find that we're going to wind up with this local directory called beyond chat GPT. 
So I'm going to go ahead and open my terminal so we can get the terminal view here. And if we go to Beyond-ChatGPT, what we're going to do, instead of fiddling with files and all this other stuff, which is maybe not so fun and is hard to keep track of, is something where we add our Hugging Face space as a remote, and we just push this pile of files to Hugging Face. The way that we add the remote is just with git remote add, and then you name it; we're going to name it hfrepo. Then you provide the address. Hugging Face gives it to you, and I'll show you an example of this in just a second. But the idea is, very straightforwardly, we add it as a remote. We're going to make sure everything's up to speed by using a command that you'll find in the repo, so I'm not going to go through showing it to you now; you can just copy and paste it in. Basically, we just want to make sure that the Hugging Face remote server is synced up with ours. And then you can just push those files to Hugging Face. That's all you have to do: we have this local environment, and then we push all those files, including our Dockerfile, our requirements.txt, and our Chainlit application, to our remote server, which is our repository on Hugging Face, and it knows what to do from there. So I'm going to swap over my screen for a second so we can see the Hugging Face side. And here we go; I'm going to go ahead and change this to here. Perfect. Okay, so this is what the Chainlit application looks like once we're done. Just to show you an example of how it works: "How can I assist you today?" "How do I make really good lasagna, my dude?" We can send that message, and you'll notice all these tokens are coming in a big stream. You don't have to wait for that to finish and then get one big chunk of text; instead, we get rapid-fire tokens, which feels very responsive. Thanks, Chainlit. And this is the idea; that's all we're doing. If we look at the files that we have on this Hugging Face repository, you'll see they're a one-to-one match with what we were just looking at: we have our Dockerfile, our README, our app.py, our chainlit.md, and our requirements.txt. That's it. And it can use the blueprint of the Dockerfile to build that container and run our application, which we have here. That's the basic idea. All you have to do, basically, is push this to Hugging Face. But where do you push it? That's a great question, and so what we'll do really quickly is look at how we would do that. When we're on our profile (so I'll go to my profile; it's exactly the same as what you'll see), what we want to do is create a new space. So we're going to go to Hugging Face Spaces and create a new space here. Click the button. We're going to name it whatever we want, like, you know, "first LLM, let's go"; and I don't think we can have an exclamation mark. We'll set a license; it's up to you what you want to do, and this code's MIT, so you can do whatever you want. We click Docker. It's blank because we're bringing our own Dockerfile. Everything else is the same. We hit Create Space, and that's where we get this link. And this link is what we can use to add this space as a remote server to our Git repo that we'll then push to.
Again, this is all spelled out in the instructions, but we just want to see it. And once you've pushed it, you it's hands off. The only thing you have left to do is to go to the settings, which you can access through the kebab menu and add your secret, right? We need it. We need a secret. Uh, so we'll have new secret. It's gotta be your open AI API key, right? This is it. And then you paste it on in. The only tip I'll give you here is definitely make sure there's no extra new line characters or spaces. It will complain at you and it won't work. So make sure it's just the string. That's the key. Once you've done that, you'll be able to access your Chainlit application just as though you were running it locally on Docker. Um, and, and that's it. Uh, that's the whole process. Obviously this was a bit quick guys. We're gonna spend more time talking about chain lit as we, as we progress forward, uh, and we'll spend more time talking about some ins and outs of Huggy face, but for right now, the idea is once we have it on our machine, we just need to put it into our Huggyface space, and we'll be able to enjoy it. And our chainlet application is just regular Python with some extra widgets, which is those decorators, which essentially let us interrupt the normal behavior of the chainlet application. And that's it, right? Again, just 15 or 16 simple steps, and you're away.
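For reference, here is a condensed sketch of the Chainlit-plus-OpenAI pattern described in this session: an on_message handler that streams tokens back to the UI as they arrive. The template strings and the exact Chainlit/openai version details are assumptions; the real app.py in the Beyond-ChatGPT repo is the source of truth.

```python
# app.py -- minimal sketch, not the repo's exact code
import chainlit as cl
from dotenv import load_dotenv
from openai import AsyncOpenAI

load_dotenv()  # pulls OPENAI_API_KEY from .env when running locally
client = AsyncOpenAI()

SYSTEM_TEMPLATE = "You are a helpful assistant."
USER_TEMPLATE = "{input}\nThink through your response step by step."

@cl.on_message
async def main(message: cl.Message):
    msg = cl.Message(content="")  # empty message that we stream tokens into
    await msg.send()

    stream = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE},
            {"role": "user", "content": USER_TEMPLATE.format(input=message.content)},
        ],
        stream=True,  # tokens arrive as they are generated
    )
    async for chunk in stream:
        token = chunk.choices[0].delta.content or ""
        await msg.stream_token(token)  # push each token to the UI immediately

    await msg.update()
```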
🧑‍💻🚀 Deploy Your First LLM Application with OpenAI, Chainlit, Docker, and Hugging Face
1,205
AI Makerspace
20240320
Take a peek into our AI Engineering Bootcamp, Cohort 1! This is a deep-dive clip from Session 2 on building and deploying your first LLM application. Chris digs into the details you need to know about building a Chainlit application with industry-standard best-practice tools out at the open-source edge! Check out the full GitHub repo here: https://github.com/AI-Maker-Space/Beyond-ChatGPT
2024-06-13T22:30:11.567715
https://www.youtube.com/live/pRbbZcL0NMI?si=8I0cJSHm-wpkP7Sl
Hi everyone, and welcome to Beyond ChatGPT: Build Your First LLM Application, brought to you by AI Makerspace and FourthBrain. My name is Greg Loughnane, and I'm the founder and CEO of AI Makerspace and a long-time instructor at FourthBrain. We appreciate you taking the time to join us for today's event. Please share in the chat where you're tuning in from today; we're so happy to have folks across our communities joining us from all over the world. During our event today, you'll learn how to build, ship, and share your very first LLM web application using the simplest stack of all: OpenAI's API, Chainlit, and Hugging Face. By inserting an LLM in between a front-end user interface and a super easy back-end serving and inference solution, you'll be amazed at how quickly you'll be off to the races building your very own LLM applications. All right, let's get into it. If you hear anything during our event today that prompts a question, please follow the Slido link in the description box on the YouTube page. We'll do our best to answer the most upvoted questions during the Q&A portion of today's event. With that, I'm pumped to welcome my good friend, colleague, and the LLM wizard himself, Chris Alexiuk, to the stage. We'll be working as a team today to deliver this code-plus-concepts lesson. Chris is the head of LLMs at AI Makerspace and is also a long-time FourthBrain instructor. As an experienced online teacher, curriculum developer, and YouTube creator, he's always learning, building, shipping, and sharing his work. Chris, you ready to show them how to do some build, ship, share? You know it, Greg. All right, let's go. See you back on stage in just a second, Chris. All right, here we go. So today, we're going to walk you through an end-to-end solution, and that's what we're going to show first: Chris is going to walk us through exactly where we can expect to be by the end of today's lesson. We're going to dig into each piece of the puzzle we need to get there. We're going to talk about the actual developer syntax that you need to know if you're going to interact with the GPT models as an LLM application builder. Then we also need to understand, if we're going to put a front end on a chatbot or virtual assistant, how we can leverage a tool like Chainlit, what kind of syntax we need, what the structure is, and how OpenAI's API really fits into that. Finally, we need to go ahead and deploy this thing and make it public, viewable, and shareable by everyone on Hugging Face. We're going to do this through a very easy and quick Dockerized solution, where we're going to package up our Chainlit application with our OpenAI connection and make it available to everyone. Finally, we'll show you exactly how to iterate, dial in, and optimize your prompts once you're live and working with your application on Hugging Face. In the end, we'll take any questions that you have, and we hope to continue the discussion toward more and more advanced and complex LLM applications. So first up, we're going to see what this thing looks like completely end to end, with all these tools combined. So Chris, over to you. Thanks, Greg. Yeah, the idea here is that we're building ourselves a chat application that we can interact with through this UI. So we can ask questions like, what is the difference between LangChain and LlamaIndex? And we can see the LLM processing our prompt.
We can look at the response and see, you know, we have this step-by-step breakdown. It gives us some outlines of LangChain and LlamaIndex. It doesn't actually know what these things are, and so it's hallucinating quite a bit here. But the idea is that we have this chat application, and it's in a Hugging Face Space, so it's publicly available; you can go to this link and use this tool today. And we have the ability to do a whole lot of very powerful things through the Chainlit interface. We can check out our prompt playground, which we'll spend a little bit more time talking about later, and we have a history of messages we can look at. The thing that's running this in the background is OpenAI's GPT-3.5 Turbo model. So we will go through the whole process of how we got from our first notebook to this app, and we'll take you through that over the course of the rest of our presentation. But with that, we'll kick it back to Greg. Awesome. Thanks, Chris. Yeah, so let's go ahead and dive in and get started. What we're really saying here is that the way most people interact with LLMs is not really the way we want to think about interacting with LLMs as developers and builders. We can ask questions, simple text in, text out; of course everybody is doing this today. But what we want to do is take it to the next level. We want to figure out this first AI engineering Lego block, if you will. This first piece of our puzzle is the OpenAI API. This is going to get us moving down the path of becoming a truly state-of-the-art machine learning engineer today. So rather than text in, text out, we want to start with something called the chat-style model syntax instead. Rather than text in, text out, the chat-style model syntax takes a list of chat messages in, and then we're going to be able to output an additional chat message. So each time we're interacting with the API, we're not simply putting in new text; rather, we're putting in a whole list of messages. This might start off as one message and then grow as we have a chat, but we also might want to include messages for lots of other reasons as well. What kind of reasons might we want to include other messages? Well, we can gain some insight into this by looking at the types of messages we'll want to include in our inputs. These include three specific types of roles: the system role, the user role, and the assistant role. If we look at the example on this slide as a first cut, we can say that the system role is saying, okay, you are a helpful assistant. The user role is prompting: who won the World Series in 2020? And the assistant role is answering the question: the Los Angeles Dodgers won the World Series in 2020. And you can see that subsequently the user asks a follow-up question: where was it (referring to the 2020 World Series) played? So we can start to get an idea of these roles by looking at this example, but diving a bit deeper on what these roles mean: the way to think about the system role is that it is sort of the context. It's the specific instructions that are guiding the LLM and setting our real objectives for what we want to get out of the LLM.
The user is sort of, well, you can think about it from the user's perspective: it's simply interacting with the application. You're providing an input and you want to get some sort of output. The user is leveraging the application. While you're building, that user is just you; but when you're finished building and you're ready to share, that's actually your user, and that's an important distinction when we think about building these things out ourselves. When we go to the assistant role, that's the assistant perspective. This is where you are now acting like the virtual assistant, or in some cases it's simply the AI, the LLM, responding to you as an AI, as an assistant. So let's see how this works in the simplest possible case, which is me simply prompting the LLM, very similarly to using the user role only. I'm just engaging with the LLM, and this is a very simple syntax: you define the role, and you use a prompt similar to the way you would with ChatGPT. What you get out is much more interesting: you get a bunch of information. But sticking with the idea of roles here, we not only get the output response in the content area, as we can see right here, but we also get the role assigned to the response, which is assistant. In this case, the assistant is actually answering our question. And so this is the first-cut way to think about engaging with the OpenAI API. The next layer of this is going to be when we add system prompts to provide some additional context to our message and our chat message list. And for this, we're going to go back over to Chris to show us what it looks like in code. Chris. Thanks so much, Greg. Yes, we're going to be checking out how to interact with the OpenAI suite of models programmatically. So we're going to use Python, and we're going to do this in a Colab notebook that you can access; there should be a link in the chat. The idea here is that we're going to use the OpenAI Python library to send requests to those endpoints and get responses, just like we saw Greg demonstrate. The first thing we'll need to do is set our OpenAI API key. If you're stuck on how to get that, you can follow the instructions at this repository link, where we go through in detail how to get an API key. We're going to talk about this idea of chat completion with these three roles. The idea is that we can act as these three entities when we're sending prompts to the LLM, and we want to leverage that to do things like set up different patterns or ask the LLM to act or behave in certain ways. So let's take a peek at how we might do that. First things first, let's just send the prompt we saw in the slides. We have our string, which is: what is the difference between LangChain and LlamaIndex? We're going to use the OpenAI chat completion endpoint, and we're going to create a request. The model is going to be GPT-3.5 Turbo, and our messages are going to be a list; we're going to send it just one item in that list, which has the role user and then our prompt, which is: what is the difference between LangChain and LlamaIndex? We can look at that big blob that Greg showed us in the slides, where we can see which exact model it used, the choices we have in terms of our response, the reason that response finished, and then our usage. So we can do things like track, in a fine-grained way, our token expenditure, average tokens per request, and all kinds of different fun things.
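For reference, here is what that first request looks like with the current openai Python SDK (v1.x); the session's notebook may use slightly different syntax depending on the library version, so treat this as a sketch rather than the exact cell.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "What is the difference between LangChain and LlamaIndex?"}
    ],
)

# The "big blob": model used, finish reason, token usage, and the content itself.
print(response.model)
print(response.choices[0].finish_reason)
print(response.usage.total_tokens)
print(response.choices[0].message.content)
```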
There's a ton of information, though, and we have to set up these dicts that have a role and our content. So what we're going to do is build some simple helper functions, both to get us in the habit of using these kinds of functions to set up our prompts and to make things display a little bit nicer. We're going to have a pretty-print method that lets us display our results in Markdown in the output cell, so we can look at the results as we're hoping to see them. You'll notice that each of the system, user, and assistant prompt helpers just wraps our message in the respective role. So let's give this a shot and see if it works. We're going to pass our list of prompts, which in this case is just a user prompt. It's the same one as before, and we can see here that we have a response. We're no longer printing that big blob, so we're just looking at that text. Now we're going to focus on modifying this a little bit. So we're going to add a system message as well. The way you can think about that system message is like an overarching idea that you want the result to adhere to — an instruction or a way that it should behave or act. So you can see we have our list of prompts, which we're going to send to our completion endpoint. We have a system prompt, which is: you are irate and extremely hungry. And then our user prompt: do you prefer crushed ice or cubed ice? Then we can get our response, and we can see it's pretty angry. I don't give a damn about ice right now. I'm starving. What's even the point of asking such a ridiculous question when I'm practically about to pass out from hunger? Just give me some food already. This response sounds both irate and extremely hungry, and this is the idea of the system prompt. So let's just change the system prompt. As you can see, we're only modifying the system prompt in our list of prompts, and let's change it to: you are joyful and having an awesome day. You can see here that the response is very different: I absolutely love crushed ice. It's perfect for keeping my drinks extra cool and refreshing. It always adds that little bit of joy to my day. The idea here is that with just changing our system prompt, we can really control how that output acts and the kind of tone and delivery that it uses, which is a powerful feature. We can, of course, still access the response object and look at our completion, look at the number of tokens we used, but again, we're just printing it in this pretty format so that we can see it the way we'd expect to see it if we were in our chat application. And with that, we will kick it back to Greg to learn what more we can do with prompting. Yeah, thanks, Chris. Super interesting to see how, if we're telling the LLM that it's hungry or that it's joyful, that really shines through in the response. And it begs the question: what is the role of prompting and prompt engineering here? In order to really contextualize this properly, we should return to the basic best practices, or rules, of prompting. Number one, we always want to be clear and specific in the instructions that we're providing. We really want to guide the output, as we did with the mood or the feeling in the last example. We also want to be providing that context, that role, that persona, a place to stand. We saw earlier: you are a helpful assistant. You are an extremely astute educator. You are X, Y, Z.
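The helper functions being described boil down to thin wrappers along these lines — the names and exact wording here are assumptions for illustration, not necessarily the ones used in the notebook:

```python
import openai
from IPython.display import Markdown, display

def system_prompt(text: str) -> dict:
    return {"role": "system", "content": text}

def user_prompt(text: str) -> dict:
    return {"role": "user", "content": text}

def assistant_prompt(text: str) -> dict:
    return {"role": "assistant", "content": text}

def get_response(messages: list, model: str = "gpt-3.5-turbo"):
    # Assumes the pre-v1.0 openai library, as in the earlier sketch.
    return openai.ChatCompletion.create(model=model, messages=messages)

def pretty_print(response) -> None:
    # Render only the generated text as Markdown instead of the full response object.
    display(Markdown(response["choices"][0]["message"]["content"]))

# Example: the "irate and extremely hungry" system prompt from the walkthrough.
prompts = [
    system_prompt("You are irate and extremely hungry."),
    user_prompt("Do you prefer crushed ice or cubed ice?"),
]
pretty_print(get_response(prompts))
```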
Coming back to those rules of prompting: you really want to give the model a place from which it should be providing that answer, and we saw that in this last case as well. We also want to specify the format we're asking for in the output. We didn't do this explicitly, but we sort of did a two-step to make sure that the output was in a pretty-print format — the way that our user sees and can engage with it. So we're hitting a number of the top rules of prompting and prompt engineering that we always want to keep in mind. The last one that we're really not hitting yet, but that it's time to talk about, is really dialing in the input through few-shot prompting or through chain-of-thought prompting. And it's worthwhile to just review what these are in general. If we look at the papers that each of these ideas came from, the paper for few-shot prompting was called Language Models are Few-Shot Learners, a very famous paper from the team over at OpenAI. What we see in the example that they started with is that they're teaching the language model new words. A whatpu is a small furry animal native to Tanzania. An example of a sentence that uses the word whatpu is: we were traveling in Africa and we saw these very cute whatpus. That's our example. We got our input, we provided an output. Now we set it up so that's our one-shot example, and we ask for a subsequent output with just this input: to do a farduddle means to jump up and down really fast. An example of a sentence that uses the word farduddle is. So here we have a one-shot, or let's say few-shot, prompting example. You can provide one shot, two shots, few shots, many, many shots. And this is really a key aspect of dialing in the output of your applications. If we take it to the next level, we can not only provide an input example and an output example response — we might also want to give some really key details in that output example on why that's our response. So in this case, the standard prompt gives the example: Roger has five tennis balls. He buys two more cans of tennis balls. Each can has three tennis balls. How many tennis balls does he have now? The answer is 11. And this is from the original paper, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, which I would totally encourage all of you to check out as well. The way to take this initial one-shot prompt example to the next level is, rather than just saying the answer is 11, to say: Roger starts with five balls. Two cans of three tennis balls each is six tennis balls. Five plus six is 11. Therefore, the answer is 11. So by providing this idea of having the model think through, step by step, what the reasoning is to get to that answer, the few-shot prompting can be a much more powerful strategy. The way this works in the OpenAI framework with the system, user, and assistant syntax is what we're going to see in code next. From Chris, back to you, my friend. Thanks, Greg. Yeah. So let's explore how we would implement a few-shot example. We're going to use basically the same example that you can see in the paper, Language Models are Few-Shot Learners. The idea here is that we can use that assistant role to kind of pretend to be the output of the LLM, and that's going to let us really cleanly set up these few-shot examples, where we show something similar to a conversation history, right?
We show the user's initial query, then we show a response, then we show another query, and then we let the model use that previous chat history, you can think of it, to inform its answer. So again, we're just going to use some nonsense words. We have the prompt: please use the words stimple and falbean in a sentence. You can see this is our list of prompts, so we're going to pass this and get a response. ChatGPT — or in this case, GPT-3.5 Turbo — is absolutely not sure what those words mean and so refuses to do what we ask. Though technically this is using those words in a sentence, so good job on a technicality there, GPT-3.5 Turbo. But we can use the few-shot prompt pattern to effectively teach it these new nonsense words. So we have our user prompt now: something that is stimple is said to be good, well-functioning, and high quality. An example of a sentence that uses the word stimple is. And then we kind of disguise ourselves as the LLM here with the assistant prompt and give a response: boy, that there is a stimple drill. We then set up another query: a falbean is a tool used to fasten, tighten, or otherwise is a thing that rotates or spins. An example of a sentence that uses the words stimple and falbean is. And then we're going to allow GPT-3.5 Turbo to actually complete this by offering its own assistant response. You can see that we get back a sentence: wow, this stimple falbean is a game changer in the construction industry. And if you look at what we assigned the definitions to be, this sentence makes absolute sense. While this is obviously a playful example, it's a very powerful pattern in terms of getting your outputs to match a specific format, injecting some kind of novel information, or allowing the language model to come up with better results by showcasing a pattern to it. An absolutely powerful technique. Next up, we have chain-of-thought prompting. Chain-of-thought prompting is a very powerful technique because it lets the model take time, as it were, to think through its responses, as Greg was explaining. So we have this reasoning problem: Billy wants to get home from San Fran before 7 p.m. Eastern Daylight Time. It's currently 1 p.m. local time. And then we have some options of travel: a fly option and a bus option, and then a teleporter and bus option. The idea here is that we're tricking it, right? It's currently 1 p.m. local time is an attempt to trick the model: though it's 1 p.m. Pacific time, it's actually much later Eastern Daylight Time. So we're going to see if it notices, if it picks up on that. And the answer is it doesn't, if we just leave the prompt in that normal format. But if we add just this string to the end of our prompt — think through your response step by step — you can see that it gets the answer correct, which is that it does matter which travel option Billy chooses. And it showcases the entire logical flow, including this conversion between the two time zones. So this is a very simple pattern that is extremely powerful for allowing your language model time to think through and develop quality responses.
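Putting both patterns into code with the same illustrative helpers as above — the prompt wording is paraphrased from the walkthrough rather than copied exactly:

```python
# Few-shot prompting: assistant-role messages stand in for the model's "previous" answers,
# teaching it the made-up words before asking for a new sentence.
few_shot_prompts = [
    user_prompt(
        "Something that is 'stimple' is good, well-functioning, and high quality. "
        "An example of a sentence that uses the word 'stimple' is:"
    ),
    assistant_prompt("Boy, that there is a stimple drill."),
    user_prompt(
        "A 'falbean' is a tool used to fasten or tighten, i.e. a thing that rotates/spins. "
        "An example of a sentence that uses the words 'stimple' and 'falbean' is:"
    ),
]
pretty_print(get_response(few_shot_prompts))

# Chain of thought: append one instruction so the model reasons step by step.
reasoning_problem = (
    "Billy wants to get home from San Francisco before 7PM EDT. It's currently 1PM local time. "
    "Does it matter which travel option Billy selects?"
)
cot_prompts = [user_prompt(reasoning_problem + " Think through your response step by step.")]
pretty_print(get_response(cot_prompts))
```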
And with those two demonstrations out of the way, we'll kick it back to Greg and learn what we're going to do with this. Yeah, thanks, Chris. So the prompting strategies that we just discussed are going to be essential to making it easy for users to interact with our application. All of this together forms what we can call a prompt template, and that's going to be a key piece of the way we build our application for our users. When it comes to building a chat application, this really is the piece that's going to give us that front-end developer role and vibe. With just some simple Python code, we can create very, very powerful UIs, as we've seen, and we're going to show you exactly how to do it now with a tool called Chainlit. Chainlit is really cool for a bunch of reasons. Number one is that it's crazy easy to implement in Python, and that's where we're going to start today. Number two is that we can use what's called their Prompt Playground. That's a really robust GUI and set of tools for us to dial in our prompt and our strategies for prompting. So we can look at all the different settings on the OpenAI API, we can mess with our user, assistant, and system roles, we can provide one-shot, two-shot, few-shot examples, and we can add chain-of-thought reasoning. There are lots of great things we can do within the Prompt Playground. Now, the additional thing that's great within Chainlit — a bit beyond the scope of today, but look for it in future lessons from us — is the chain-of-thought reasoning, GUI-based experience that Chainlit provides when you're building with some of the more complex infrastructure tools that Chainlit has integrations with, like LangChain, LlamaIndex, and even now, this week, Haystack. So today we're going to focus on the simple Python code and just the Prompt Playground to get started, showing you how easy this thing is to get up and running. You can do this in real time with us if you open your terminal now. Chris is going to show us how simple the Python syntax is to get this thing up and going, really just to get something you can start interacting with before we layer in our application logic. So Chris, Chainlit hello world time. Over to you. Yes, thank you, Greg. So the idea here is straightforward. We're just going to go through how we can start using Chainlit and a little bit about what it can do. You'll notice that the command I typed is pip install chainlit. This is just to install the chainlit package in my environment. Now that I have it installed, I can run the Chainlit hello world example, and that's going to start up a process, which we're going to be able to look at in VS Code. You can see here that the interface is exactly what we saw in our Hugging Face Space. We're going to do a brief tour just to make sure that we're all on the same page about what this is doing. So first of all, we obviously have our chatbot. We have whatever it's going to say, and we have a big text box we can type something into. So let's type hello and send that. You'll notice that it immediately responds with just some boilerplate. Again, this is a hello world example. It's just meant to let the system run so you can see it's working, and then we can start building our own Chainlit apps. That's fantastic. We'll also take note of a few other important features. We have a history of messages that we've sent through the Chainlit application.
As well, we have a readme, which we can set up to explain what our application is doing or any other important details we might want to include. We can start a new chat with the new chat button. We can change our settings from day mode to night mode. And we did all of this with just Chainlit. This is just Chainlit running by itself. We haven't modified anything. We haven't written any code. This is it. It's a very powerful front end for our application that we're going to modify in the coming steps. But before we do that, I'll pass it back to Greg and we can hear what exactly we're about to do. Awesome, Chris. Yeah. So we're going to take this really powerful front end that is super easy to get up and running, and we're going to layer in some logic, because you may be asking yourself, was it really that easy to get our first LLM app going? And it's pretty easy, but it's not that easy. There are a couple of other steps that we need to integrate here. Number one is we're going to have to get our imports. Number two is we're going to set up our prompt templates. That's what we just talked about with the system, user, and assistant roles, and Chris is going to show you exactly what that looks like in code. But the key pieces of our Chainlit application are going to be two decorator functions, or two sort of meta functions, in Python. One of them is going to be called on chat start. As we're beginning the chat, within on chat start we're going to set up our OpenAI API settings — that includes factors like temperature and the max number of tokens that we're willing to output — and we're also going to begin our list of chat messages. And when we get to the on message decorator function within our Chainlit application, we're going to actually call the LLM. That's where we're doing the thing that most users who interact with ChatGPT are simply doing. This is where we're actually leveraging the LLM. This is important to keep in mind, especially as you build more and more complex LLM applications, because this actual calling of the LLM comes in much later than many people think when you're building some of the more complex retrieval applications and other applications with the integration tools that we discussed earlier — LangChain, LlamaIndex, Haystack. So once we set up our app.py file, we're going to be ready to ship and share this thing, and we're going to do that. We're going to actually deploy this thing on Hugging Face. And once it's deployed, we're going to be able to play with the Prompt Playground and iterate on our prompts directly within the Chainlit application. So we're going to deploy using a Hugging Face Space. These are some of the most famous Spaces that you've probably seen up on the Hugging Face community right now. What we're going to do is create a similar application, just like the one we saw Chris show early on in the demo today. The way we're going to do it is we're actually going to leverage a Dockerfile approach. Now, the Dockerfile that you need, we have ready for you in the repo that we'll share in the chat with you now. It's in the Beyond ChatGPT repo. You don't really need to worry about the details of the Dockerfile, and you don't really need to worry about the details of Docker in general, in order to get this thing up and running.
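As a rough sketch of that structure — a simplified, illustrative app.py rather than the exact file from the repo, omitting the Prompt Playground wiring (cl.Prompt / cl.PromptMessage) for brevity:

```python
# app.py -- minimal Chainlit skeleton (run with: chainlit run app.py -w)
import openai
import chainlit as cl

@cl.on_chat_start
async def start_chat():
    # Runs once when a new chat session begins: stash the model settings for later use.
    settings = {"model": "gpt-3.5-turbo", "temperature": 0, "max_tokens": 500}
    cl.user_session.set("settings", settings)

@cl.on_message
async def main(message):
    # Runs on every user message: this is the point where the LLM is actually called.
    settings = cl.user_session.get("settings")
    # Older Chainlit versions pass a plain string; newer ones pass a cl.Message object.
    content = message.content if hasattr(message, "content") else message
    response = openai.ChatCompletion.create(
        messages=[{"role": "user", "content": content}], **settings
    )
    await cl.Message(content=response["choices"][0]["message"]["content"]).send()
```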
The big idea with that Dockerfile is that it's simply a way to package up our application and our front end in a nice portable box that we're then shipping into our Hugging Face Space. Once we have our Dockerfile, the only other piece we need to add is our OpenAI API key, and we're ready to go. We can ship this thing and actually interact with it and engage with it live. We can share this with our brother, our sister, our boss, our community. We can actually share this app that we just built. So this is really where the magic happens. One last step we're going to take in this final piece of the demo is, once we have our application up and running, we can use a very powerful tool within Chainlit called the Prompt Playground to really dial in our template — as you see here on the screen, our system, user, and assistant roles. And we can also change the settings on the OpenAI API, as you can see in the right-hand side of the panel. All of this, again, is without any LangChain, LlamaIndex, or other infrastructure tools, and it's done very easily by clicking this little edit button within the chat, as Chris will show. So with that, I'm going to kick it back over to Chris to show us how to containerize, deploy, chat, and optimize, bringing it all together. Chris. Thank you, Greg. Yes, so as you can see, we're sharing the whole screen here. We've got both our browser and our VS Code open, so we're about to put this all together. First things first, in the repository we're going to find an app.py file. This is going to be the core logic for our application. So we'll notice a few things. First of all, we have a lot of imports. Some of these imports should hopefully seem fairly self-explanatory. We need OpenAI. We need Chainlit. Makes sense. But you'll also notice that we have these Prompt and PromptMessage objects from Chainlit, as well as a ChatOpenAI provider, again from Chainlit. The first thing we're going to do, as Greg said, is set up our system template. This is basically our system message. You can edit this however you'd like — add whatever flavor or context you wish — but this is where that magic is going to happen. The next thing we're going to do is set up a user template, which is going to accept some user input, and then we're going to add that chain-of-thought prompt to the end so that we get those quality, well-thought-through answers that we expect. Now, like Greg said, there are two main decorators we're going to be using in Chainlit. A decorator is just something that you can wrap your function in that changes what it does or lets it interact with a different system. In this case, we're going to use it to interact with the Chainlit front end that we saw earlier. So you can see we have this on chat start decorator, where we have our settings. These are the model settings we're going to be using today, and these are all the different things that you can change. You'll notice that we're using a temperature of 0, which means that we're going to get fairly non-creative responses. We have a max tokens of 500. We're using a top-p of 1 with no frequency or presence penalty. All of these parameters influence the kinds of generations that we're going to get from our model, so it's very important to play around with these, experiment, and discover what's best for your application.
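In code, those settings and templates amount to something like the following — the template wording is illustrative, with {input} marking where the user's message gets formatted in:

```python
# Model settings described above: deterministic output, capped at 500 tokens.
settings = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,          # 0 = least "creative", most repeatable
    "max_tokens": 500,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

# System template: the overarching behaviour for the assistant.
system_template = "You are a helpful assistant who always answers thoughtfully."

# User template: wraps the user's input and appends the chain-of-thought nudge.
user_template = "{input}\nThink through your response step by step."
```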
Once we set those settings up, we're going to store them in our user session. This is a session state that persists between different components of our Chainlit application. It's important to use this user session to pass information between these various decorators and their functions. Next up, we have our on message decorator, which is going to trigger on a message. Right? Makes sense. We're going to collect our settings from our user session, and then we're going to set up this prompt. Now, in Chainlit, a prompt is exactly what you would think it is: basically a way to collect text so that it interacts properly with the Playground as well as OpenAI. You'll notice that we have our provider, and our provider is ChatOpenAI. This is how we determine which Playground to use. Then we have our messages. This should seem familiar. We have our prompt message with the role of system, where we're going to pass our system template into our formatted prompt, and then we have a prompt message with the role user. This user role is going to use the formatted prompt of the user template, but we're going to add the input to that template using the dot format method. So the idea here is that we have this template above, which includes a formattable component, so that we can inject our user's message into that component through this prompt message. We're going to specify that our input is the message that we're going to get from our user, and then we're going to pass our settings. Next up, we're going to set up the streaming piece of Chainlit. I won't go too deep into it; suffice to say that it lets us see the response flow onto the screen, as opposed to waiting and then just dropping a whole blob all at once. You'll also notice that we're using, again, that OpenAI chat completion method that we saw in our notebook, and we're passing in our messages, our list of prompts. We're letting the stream setting be set to true, and then we're passing in the settings that we set on chat start. The rest of this is appending our chat history to our prompt, and from there it's just Chainlit boilerplate to send the message back to our user. So let's actually get this thing going. We'll head into GitHub, and we're going to rely on this Dockerfile, which, again, like Greg mentioned, we've just built. It's there for you to use. You'll notice some additional user permission commands. This is because of Hugging Face's specific Docker deployment strategy. But enough of that. We will start with cloning our Beyond ChatGPT repo. We'll cd into that repo. As you can see, if we type git status, we are in this repository now. Now that we're here, we want to add a remote, and the remote we're going to add is a Hugging Face Space that we need to create through the Hugging Face UI. The way to do that is to go to your organization or profile, click new, then Space. We're going to name this Beyond ChatGPT Demo. We can use whatever license we need to, based on the libraries that we have included in our application, so select whichever license is most appropriate for your task. We're going to select the Docker template — a blank Docker template. We're going to use the basic hardware, which is free, which is fantastic. And we're going to have this set to public. Once that's done, we can create this space.
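Before moving on to the deployment steps, here is roughly what that on message flow boils down to — a hedged sketch using the pre-v1.0 openai streaming interface, with the Prompt Playground object omitted and the templates defined inline so the snippet stands alone:

```python
import openai
import chainlit as cl

system_template = "You are a helpful assistant who always answers thoughtfully."
user_template = "{input}\nThink through your response step by step."

@cl.on_message
async def main(message):
    settings = cl.user_session.get("settings")  # stored earlier in on_chat_start
    user_input = message.content if hasattr(message, "content") else message

    messages = [
        {"role": "system", "content": system_template},
        {"role": "user", "content": user_template.format(input=user_input)},
    ]

    msg = cl.Message(content="")
    # stream=True lets tokens flow onto the screen as they arrive rather than all at once.
    async for chunk in await openai.ChatCompletion.acreate(
        messages=messages, stream=True, **settings
    ):
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            await msg.stream_token(delta["content"])

    await msg.send()
```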
And now you can see that we can clone this space to get started. We're going to do a slightly different flow, which is adding it as a remote, which we can do here. Now we're going to pull in that remote. We're going to not allow a rebase. So we're going to set the merge strategy for no rebase. And we have to include this allow unrelated histories tag. This is just so that we don't get any complaints from Git when we do this process. Once we've done this, you'll notice that we have some merge conflicts. We can navigate to our readme, which is, in this case, what's going to be the source of those conflicts to accept the current change in both cases. Then we're going to go ahead and save our readme. Now that we've done that, we can head back to our terminal and we're going to git add dot. So we'll just add everything that's in our present repo. Then we're going to git commit dash am releasing app to HF space. Basically, what this is doing is it's just adding and committing any changes as well as adding this message. Now that we've done this, we are good to push our changes to this HF space on the main branch. You'll notice that as soon as I did this, my web browser changed as we're going to start building our actual application. Now, this won't let us use our application because we have to add a secret. The way that we do this is by going to the secrets tab, which can be accessed through the kebab menu, going to settings, and then scrolling down to our secrets and adding a new secret. We're going to add the OpenAI API key, which we can do again by clicking new secrets, pasting it in there. And then I'm going to actually complete this step in another window so I don't expose my OpenAI API key. But the idea here is basically we're going to set up an environment variable so that we can use this application. You'll notice that this is building. Once it's completed its build, we'll be able to use our application the same way that we saw at the beginning of this process in our end-to-end example, which is already loaded and running here. Let's take a look at a prompt. Just say, you know, what is the, let's say, what is four plus five times six. Remember that we're using that chain of thought prompt that we added to our app so that we can leverage that chain of thought style. We also get access to this prompt playground. The prompt playground is a very helpful tool. It lets us see what the system prompt was, what the user prompt was, and what our response was. We can also check and see what exactly it got by clicking the formatted tab. This is going to show us what was formatted and it's highlighted, in fact. We can select a variable in case we had many variables. We can change this variable if we wanted to. And then we have the ability to submit this. We're going to get a new response. And this is going to be something that lets us play around with our prompts. We can also click the little settings button and we can select which model we want to use, the temperature that we want to use, the max tokens, all of those settings that we saw previously. The idea here is that this is a full playground that lets us really, you know, figure out how to best format our prompts. We can do things like removing the actual, you know, chain of thought prompt, and we see that our answer is way less verbose. Anyway, it's a very powerful tool, and it comes built stock into Chainlit. 
And that's what makes it such an excellent platform for creating, playing with, iterating on, and really perfecting these kinds of applications. And with that, we'll send it back to Greg. Wow, Chris, that was awesome, man. Thank you so much for showing us how to containerize, deploy, and optimize our prompts. That really brings it all together. And with that, that's your first LLM application. Congratulations. That's all it takes. And we'd love for you to share your first LLM application with us as you move forward and really get going building, shipping, and sharing. In conclusion, prompt engineering is very much still important. It's essential. It's integrated directly into any application that we build, and we need to be very aware of the way that our user is going to interact with our application. We need to be sitting there providing the backstops through our prompt templates: making sure that we're giving specific instructions, making sure that we're providing context, doing one-shot, two-shot, few-shot with chain-of-thought reasoning, and providing the details on the kind of output that we want to give within our application. We also found out that it's really important to get to know the chat model syntax, or the chat completions model, from OpenAI. This is not only important for building with OpenAI tools; it's also important if you want to build with Chainlit — package that up, deploy it on Hugging Face, and you're off. You can now build these applications for your particular context and your particular situation. And so with that, we're going to go ahead and move to the Q&A portion of today's event. Chris, come back up on stage and let's hit it. We're opening up the floor for questions from the audience. If you have questions that have come to mind, please drop them in the Slido link that we will put in the YouTube chat again. And once we run out of questions, we're going to wrap up for today. Chris, first up, we've got: can we get public datasets in the same format as you're showing us right now — system, user, assistant — which we can match to our target application? Do these exist? I mean, you can in some senses. Yeah. So I think if I understand the question correctly, instruct-tuning datasets are exactly this. They have that information in exactly this kind of format, so that you can fine-tune whatever LLM you're using to use this similar kind of format of having that system, user, and assistant. Though you might find that the language for system, user, and assistant is very non-standard. It might be instruction, input, output, or whatever it happens to be. But yes, you can absolutely find datasets that match this format. Yeah. Dolly 15K comes to mind. What else comes to mind today for you, Chris, in terms of specific datasets for them to look at? Like the originals — Alpaca, the dataset that kind of kicked us off into this open-source journey recently — and then derivatives of it. There's also the FLAN dataset, which is many millions of tasks. So there are a lot of good options for these. BLOOM's xP3 dataset comes to mind as well.
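To make that dataset-format point concrete, an instruction-tuning record typically looks something like the following — field names vary between datasets, and these examples are illustrative rather than copied from any specific one:

```python
# "Instruction / input / output" style, as in Alpaca-like datasets.
record = {
    "instruction": "Summarize the following passage in one sentence.",
    "input": "LangChain and LlamaIndex are frameworks for building LLM applications...",
    "output": "Both are orchestration frameworks that help developers build LLM apps.",
}

# The same idea expressed in chat-completion style (system / user / assistant).
chat_record = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the following passage in one sentence: ..."},
    {"role": "assistant", "content": "Both are orchestration frameworks for LLM apps."},
]
```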
Yeah, this is a question we get a lot in the courses that we've been teaching. So yeah, very important idea, this idea of instruction tuning, and it fits in with a lot of what we're talking about today. All right, so: does GPT-3.5 Turbo analyze images? No. No, it doesn't. Hopefully we're getting that GPT-4 multimodal release at some point in the not-too-distant future. That'd be cool. That'd be cool. All right. Next up, we've got: is Chainlit also compatible with other open-source LLMs like Llama or Falcon? Oh, you betcha, yes. Absolutely. So it is compatible with anything that you can put into Python — technically, you can put it into Chainlit. If the LLM is compatible through LangChain or LlamaIndex, it's a very easy lift to get it working in Chainlit. But even beyond that, you can set things up in such a way that it's going to be able to be leveraged. But yeah, absolutely, Chainlit itself is LLM agnostic. Yeah, yeah. So what would the flow be? What would you need to change in the application we did today to insert an open-source LLM rather than OpenAI? Yeah. So, I mean, all of the places that we reference OpenAI — in the settings, in the provider for our playground, for the actual responses that we get, so that OpenAI chat completion endpoint — those are the things we have to plug and play with our open-source LLM. Again, the easiest way to do this is going to be through a framework like LangChain, where you can just use it as an LLM object, and it has the compatibility built in. So I would wrap my open-source LLM in a LangChain LLM and then use it like any other LangChain LLM. So you're saying that even just trying to make this open source requires a slight next layer of complexity added to our LLM application infrastructure. I would say, for the most part, you wind up in that situation, yes. Okay. Okay. Very cool. Very cool. All right. So we've got a question: can you track conversations with multiple users, e.g., user 0, user 1, assistant, et cetera? Yes. You would have to do that not in Chainlit, so you would have to have a different service to do that. But you can. That's not to say that you need, like, another application — it's just going to be additional logic in Chainlit. I believe that there's a cookbook example from Chainlit that shows two different LLMs returning their responses at the same time. But in terms of anything to do with tracking or these kinds of things, they're going to require you to engineer a solution or attach a pre-engineered solution from a different library. Got it. Got it. Okay. So: if I build an app using this stack at work, will my prompt inputs and outputs stay private? How does this work exactly on the privacy front? So your inputs and outputs aren't by default stored or monitored or tracked in Chainlit. So in some sense, yes. There's a possibility that the history could leak your inputs across different users, though it shouldn't. I'm not going to say 100% it won't, but it doesn't actually keep a record of your outputs that is visible to others in that sense. Yeah.
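Coming back to the earlier question about swapping in an open-source model: a hedged sketch of the LangChain-wrapping pattern Chris describes might look like this — the model id, parameters, and token handling are all assumptions for illustration, based on the 2023-era langchain API:

```python
import os
from langchain.llms import HuggingFaceHub  # LangChain wrapper around Hub-hosted models

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."  # your own token

llm = HuggingFaceHub(
    repo_id="tiiuae/falcon-7b-instruct",            # any hosted open-source model
    model_kwargs={"temperature": 0.7, "max_new_tokens": 250},
)

# Once wrapped, it is called like any other LangChain LLM, so the surrounding
# Chainlit app logic can stay largely the same.
print(llm("What is the difference between LangChain and LlamaIndex?"))
```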
And I think we were having this discussion recently, Chris: with the OpenAI API, if you're hitting the API, the language I believe they use is that your data is not going to be used to train future GPT models. Whereas if you're just leveraging ChatGPT, it's highly likely that your information is going to be used to train future models. Yeah. So if we're talking specifically about the interaction with OpenAI, yes, the API access means that you by default opt out of having that data used for training. Everything you put into ChatGPT is used to train, and without opting in, nothing that you use through the API is used to train. So that's a good way to think about it. Absolutely. But the information is still passing through. It's definitely not private. It's just not being used to train. Right, right, right. Not being used to train. Yeah. So that's kind of an interesting reason, if privacy is of interest to you, to go build one of these yourselves. Okay, so next question: is there a way to introduce versioning in an index? I.e., I want to have docs for different releases of Python libraries, and the assistant asks for the version that I'm using at the moment. Versioning an index. I'm not sure I fully understand the question, but you can absolutely set up logic to make decisions based on the version of Python you're using or the version of an index, based on some tracking. If you're using LlamaIndex, I know there's a native integration with W&B, so Weights & Biases, that will actually let you build a versioned index. So you might want to leverage that tool to help keep track of your indices. Otherwise, traditional kinds of data lineage solutions like DagsHub and whatnot are going to be the way to go. All right. All right. Is this all doable in English only? Any other languages supported? How can we build an app for a non-English language? So because it's LLM agnostic, whatever LLM you want, you can plug in. That includes non-English LLMs. Most of the larger closed-source LLMs do have fairly decent language support, but yeah, the lowest-hanging fruit is to use a model that's tailor-designed for whatever language you're trying to use. All right. A couple of quick questions to close us out here today. What's temperature in the settings, exactly? Temperature — you can think about it as creativity. So, what happens when we're generating an output? There's what should come next: the most likely next token. Temperature is a setting that says, what if sometimes we just don't choose that one? We choose a different one, right? And so it introduces some noise into our generation that lets the model act more expressively and creatively. It's a great thing for creative generative tasks, but it's maybe not the best for systems where you want the output to be very extractive. So for summarization tasks or tasks relating to retrieval and question answering, we might not want the LLM to get too creative. But basically, what it is, is that sometimes it won't choose the most likely next token. Very cool, very cool. All right, Chris, well, I think that about wraps up the Q&A portion of today's event. That was super awesome. Thanks for all the demo vibes today.
Thank you, everyone, for your participation. This brings us to the end of today's event, which has been brought to you by AI Makerspace and 4thBrain. Our AI Makerspace community is still just getting started. We just finished our very first LLM Ops: LLMs in Production Cohort 1, and it went great. The latest version of our curriculum for advanced developers includes deep dives on the latest and greatest tools for building truly complex LLM applications, from LangChain and LangSmith to LlamaIndex, Weights & Biases, and much more. Cohort 2 launches next Tuesday, September 19th — apply today. And for those of you who enjoyed today's demo but still feel like you're really just getting started building with LLMs, we highly recommend that you check out Fourth Brain's upcoming Building with LLMs course. This is a more entry-level course where you'll learn the basics of fine-tuning, of LangChain, and of really how to take a simple application like we saw today to the next level, rather than deploying it and operating it all in production environments and at scale. This course from Fourth Brain serves as a perfect precursor to AI Makerspace's LLM Ops: LLMs in Production course. Also, please give us a follow at AI Makerspace on LinkedIn and YouTube and at AI Makerspace on Twitter to stay up to date on what we are building, shipping, and sharing. And until next time, we hope that you keep building, shipping, and sharing. We'll see you all next time. Bye guys. See ya.
Beyond ChatGPT: Build Your First LLM Application
3,678
AI Makerspace
20230912
GPT-4 Summary: "Launch Your AI Journey: Beyond ChatGPT Workshop Awaits! Ready to build your first Large Language Model (LLM) application? This workshop is your gateway! Discover the essentials beyond classic MLOps, delve into OpenAI's API, and deploy GPT-4 effectively. You'll learn to craft a smooth 'chat-style' interface using Chainlit and perfect your prompt engineering skills. Plus, we'll guide you on making your application public via Hugging Face Spaces. Ideal for data scientists, ML engineers, and anyone eager to create chatbot-style LLM applications. With OpenAI API, Chainlit, and Hugging Face Spaces, you'll be showcasing your LLM project in no time. Join us and step into the world of LLM Ops with ease!" Event page: https://lu.ma/firstllm Have a question for a speaker? Drop them here: https://app.sli.do/event/upxYiBkzrtgSob4pQe9hFQ Speakers: Dr. Greg Loughnane, Founder & CEO, AI Makerspace https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO, AI Makerspace https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply to our upcoming LLM Ops: LLMs in production course on Maven today! https://maven.com/aimakerspace/llmops How'd we do? Share your feedback and suggestions for future events. https://forms.gle/ZAHhGuYVU2c3znbq7
2024-06-13T22:34:50.583339
https://www.youtube.com/live/hrzjcsai6DI
Hey Chris, you hear about that OpenAI GPT Store? I sure did, Greg, yes. So, like, is it as awesome as everyone is saying? I think so, yeah. It's pretty cool. Oh nice, nice. So does that mean it's the answer for everybody, consumers to enterprise alike? I wouldn't say everybody. No, no, not close. Like, even the Enterprise edition — maybe that one's the answer, right? In some cases, definitely, but definitely not all or most. So it depends. Something like that? Strong it depends. Yeah, okay, okay. Well, today, let's see if we can demystify and break through all of the hype, get to the bottom of what really matters, and what folks trying to be AI business leaders today can do to create some value for themselves, for their companies, and the world. Sound like a plan? Let's go. Let's do it. All right, we'll see you back in a bit, Chris, for our first demo. Thanks for taking the time to join us for today's event, fresh off the back of OpenAI's Dev Day last Monday. My name is Greg Loughnane, and I'm the founder and CEO of AI Makerspace. Today, you'll learn about the tooling and processes you need to be thinking about as you and your team set out to build generative AI applications from 2023 into 2024. We have a special guest today, Evan Shai, CEO of Coding Temple, who we're excited to partner with today — a multi-time founder who has built some great companies — and he'll be joining us for the Q&A portion of today's event. And of course, we have the LLM wizard and CTO of AI Makerspace, Chris, who we just saw and will have back on the stage soon. So with that, we're going to get right into it today. We're going to talk about how to build your competitive AI advantage today in the crazy — maybe the craziest — market anybody's ever seen. At AI Makerspace, or AIM, we like to align our aim every time we set out to teach anything. And by the end of the day today, you're going to really understand, hopefully truly grok, some of the tooling and the processes that you need — from low code and even no code to a little bit of code — to develop the proof-of-concept and MVP-level apps you want. And then also, when we think about building your own GPT or building ChatGPT for your private data, what are the actual steps you need to go through to get where you're trying to go with this? And what are some of the things you should think about both before and after you start playing with the tools? All right, first off, to all the business leaders out there: I know you're all asking yourself today, how do I judge the new latest and greatest tool, the automagical platform that somebody's always trying to sell me? Nowhere is this more obvious than the GPT Store that was released since last week. And so we're going to dive in. We're going to try to figure that out. And this idea of how to build ChatGPT for your data, coming up nearly on the one-year anniversary of ChatGPT — this is a big question every company is asking themselves. And hopefully we're going to really get to the bottom of what we need to do to make this happen and how we can think about creating value for our products and our customers by the end of today. First, though, some context before we get into prototyping with some of this new tooling, and the process you need to think about when you actually go and build these prototypes, when you actually start empowering your team to get after it with your own private data. All right. To contextualize everything first: I know the entire world is focused on LLMs, and so is this particular presentation.
Generative AI is all the rage, but it's important to remind everyone that of the value in AI that's been created to date — these are the solid circles inside of this chart here — most of it is not in generative AI. Most of the value exists in classical ML and classical deep learning, in something called supervised learning. You can just think about that as sort of the original machine learning, and the companies that have built really, really powerful systems — many of them leverage this simple tool, which is going to continue to grow in the next three years. This is the more faded circle around each of the solid circles here. However, generative AI is going to grow to many times its size, as it's doing in real time in front of us. And this is where a lot of the focus is going to be for companies heading into the new year, just because of how much buzz surrounds this entire idea of building a GPT for yourself. But again, this is where most of the value has been, and most of the value, many think, will continue to be in the supervised learning domain. These are potentially easier problems, but they don't use as new or as interesting a tool set. Stepping back even further, if you're a digital product company, the important thing to ask is: how do I improve my product for my customers? Not: how do I use AI? And so, at the end of the day, most applications that you've set out to build to improve your customer experience still don't need AI. And of the ones that do need AI, most of them still do not need generative AI. Okay. That's just my caveat of the day here. But of course, every board, every CEO, every business leader wants to be up to speed with the latest and greatest, and of course, that's generative AI and LLMs. And when you start building with generative AI and with LLMs, the line between prototype and production becomes even finer than before. We used to call it going beyond the Jupyter notebook, and we tried to make sure that the models we built in a notebook environment also performed the same in production. That was not as hard as it is to guarantee the output of a large language model application. Once you ship it to production and the large language model starts engaging with your customers and stakeholders, it's not always clear exactly how it's going to behave in all situations. And this is the problem of evaluation. How do we know that something generated character by character, token by token, or word by word from a large language model is right, or is good, or is high quality? These are often slippery and ambiguous terms. And so we need to be a little bit more careful before we put these products in front of customers at scale. We need to prototype more quickly and get to production more quickly, but for the early adopters among our customers. And we need to dial in what evaluation actually means, so we can retrain our models in the right amount of time and with the right data, and we can continue to make sure that that customer experience is improving and never falling off a cliff because of some crazy outputs that our models might provide that we didn't see coming.
And with this idea of AI transformation, as we deploy more and more advanced models to production, it's really important as an AI business leader that you go all in on this. There are, of course, no promises of massive upside — that you're going to make hundreds of millions of dollars and capture the entire market. It's more that if you don't start investigating these tools and you don't start adopting them, then you might be opening yourself up to massive downside. And then there's the thought leadership that's going to be required to keep up with this state of the art. I hear from CTOs all the time at AI Makerspace that this is the fastest they've ever seen anything evolving or maturing in their entire careers. And that pretty much says it all. If you want to stay out there on the edge, it's going to be harder than ever before, because it's going faster than ever before. So if you want to be an AI business leader, best practice: make sure you go all in on that AI transformation at your company. Now, if you decide that's something that's for you, then the first place — the unfortunate place where this always starts — is that you need to get buy-in from the top. And unfortunately, the truth is that it always starts here: well, AI is existentially scary to our competitive future. If we don't do this, competitor A, competitor B, competitor C — they might eat our lunch, take our entire business. And so the worst strategy is to just not start at all. Okay, now they're listening. Now what? Well, now we demonstrate this idea of the AI transformation flywheel. Hey boss, listen up, because this is what we're going to aim at. We want to develop our giant, industry-leading competitive advantage flywheel that's based on our data and the models that will continue to improve. We need to build this flywheel out of smaller wheels. We need to get started with pilot projects. We need to identify the right things, the lowest-hanging fruit, to spend time on. We need to get those into production and start seeing some small wins with our customers. We need to expand the idea of how we did this out to other folks within our company. And of course, if we need to, we'll grow the team through hiring. But hiring is an often misused strategy in a marketplace where the only way to keep up is to be constantly learning, literally every week of your life, outside of work. So this consulting and training and hiring — it's going to be some combination of these. There is no one silver bullet, and somebody's got to lead the hub in the center. And if that hub in the center is going to follow along with the best practices of other companies in the space, you're probably going to develop a place where people can come to talk about why they would do a specific AI project, about what they should be building with, about the tooling and processes and team structure they should be thinking about in their organization or functional department, and about what they need to do to train their folks, or to even understand themselves what's going on. And that internal consultancy has to be led by someone: the generative AI business leader. That's going to be the seed through which you can grow the AI DNA into your company. What does this look like practically? Well, there are two functions that may not exist in your company yet. One is AI product management, and the other is AI engineering.
AI engineering is a broad term that encompasses all of prompt engineering and data science and everything applicable on the edge of what matters today. Aligning stakeholders down to the classical engineering and data science teams is the task of these two groups. And of course, the AI product management function is not going to be built out first. It will be the AI engineering function, under a suitably motivated AI business leader. So that's going to be you and the small AI engineering team that you start to build these pilot projects with. So that's the idea: you want to oversee this hub. You want to develop those pilot projects of lowest-hanging fruit. You want to always be focused on your customers and your products — not on AI first, but on your customers and products first, then on AI. And of course, you're going to have to be the local influencer, the evangelist, and the place where people can go to find answers. And you won't have them a lot of the time. But if they come to you, you just say, I don't know yet, but I'm going to find out and get back to you. One quick success story for people to think about as they think about their product suite, their product line: Airbnb is a great example, because Airbnb is not in the AI business. They give you a place to stay anywhere in the world. And sometimes when you go other places in the world, you have to speak other languages, but a lot of people don't. And so rather than trying to get people to learn new languages, they can leverage AI to make communication between guests and hosts easier than ever, because state-of-the-art translation is a tool that is ready for prime time. When you think about the particular problem that you're trying to solve for customers, for stakeholders, think about how AI can augment that problem and make a better solution. Again, don't start with AI; start with your context. So why isn't AI adopted yet? Well, part of the reason is that these teams that we talked about haven't been built out yet. And it's hard to figure out how to build these teams with the minimally sufficient people within a market and an economy that's seen increased interest rates, has seen big layoffs, has seen an overall desire to do more with less. Building out massive teams doesn't make any more sense than it used to, and it's always been something that's been a bit hard for most companies to do anyway. This is the long-tail problem of actually rolling out solutions at scale. We've seen companies like Facebook and Google capture entire markets for things like web search and advertisements by building out these huge teams. The entire long tail of small and medium-sized businesses is never going to be able to build out these huge teams. They've been waiting for the low-code, no-code revolution. And it's coming. It's here, if you know where to look. And you can be one of these early-adopter companies — if you're attending this event, that's probably of interest to you — by knowing what to pick up and what to start building with today. You see, it's not about the large language models. Those are going to continue to improve. And it's not really about the killer application side either. The general models are going to get better generally.
The killer apps are going to get better in a low-code, no-code way, as we've seen, like with OpenAI's GPT Store. The killer app is just a chat. It's just texting like you would text your mom, your sister, your brother. It's just meeting the user where they are with a text-like interface that adds value to your business and to their engagement with your products. That's where we can start today. We can worry about additional killer apps in the future. We can worry about all of this other stuff in the future — even the idea of a hyper-local AI model trained on all of your proprietary data. We don't need to boil the ocean to get started today, although that may very well be where this heads. Rather, we can get started with prototyping. And the first place to start, of course, is with GPTs. This is starting to be kind of a hard acronym to track these days, as many jokes have been made about since last week. But when we're talking about GPTs, we're talking about OpenAI's recent low-code platform that you can actually build your own custom GPTs on. GPTs let you customize ChatGPT for a specific purpose — for your purpose. They say the best GPTs will be invented by the community. That's also an interesting thing to think about as you think about your own internal data science consultancy in your business. That's the community you want to build. Privacy and safety were definitely top of mind, as they have been for so many people building these systems. Connecting GPTs to the real world — aka augmenting them with search and other tooling, making them more agentic, so to speak — is a way to actually improve some of the ways our users can interact with these models. Then, of course, everybody wants to deploy their internal-only GPTs. It's important to note two things here. One is that we found this little checkbox as we were building this week and preparing this for you today, where you can opt in to using conversational data in your GPT to improve OpenAI's models, and we went ahead and clicked the checkbox on this for our little app today. And more generally, one of the things we're always reminding folks that we train to be AI engineers — and it's the same thing that Sam Altman reminded everybody during OpenAI's Dev Day — is that they do not train on the data that goes in and out of GPT-3.5 Turbo, GPT-4, or now GPT-4 Turbo through the API, or through ChatGPT Enterprise, which they've recently rolled out. They're saying they do not train on this data. That doesn't mean they don't have the data to look at. It doesn't mean they don't assess the data for whatever they might want to understand about their users. It just means they simply do not train any of their models on that data. So one quick and easy way, if you've got sort of in-between data that's not super important to keep private — no personally identifiable information, health records, et cetera — is to just use the API instead. And while we will show you how to do that today, what we're first going to do is show you how to build a no-code GPT with the absolute latest and greatest tool from OpenAI. And we're going to do it using some Coding Temple data.
Partnering with Coding Temple today, we found some public, rich content that they have on their site, and you probably have the same on yours as well. This is often the easiest place to start, to start getting buy-in from folks at the top. And so to see exactly what we did with this rich blog content in building our own GPT, I'm going to send it over to the LLM wizard himself, Chris Alexiuk, who's going to walk us through it. Chris? Thank you, Greg. Yes. So the idea is that today we're going to build something just like this, which is a custom Coding Temple GPT. It's going to give us the ability to query some of their documents as well as, you know, inform the tone and style that this particular GPT is going to respond in. So we have a sample question here: how could I combat imposter syndrome? And then we get some information. This information comes fairly straightforwardly from their blog, so it's good information, and it's grounded by that rich content that Greg was talking about. So the idea is, you know, how do we make this thing? Well, with the new GPT builder, we can go to My GPTs, we can click on Create a GPT, and then we can build one. You'll notice that we have this Message GPT Builder section, where we can do things like say, hey, can you name my GPT Coding Temple GPT? And then we can send that message off. The idea is that we can interact with this tool in an absolutely no-code way. We can interact with it in a completely conversational way, and it's going to be able to update our settings. It's going to be able to change what our actual application looks like. And as you can see, it's going to suggest things to do next. Like, it's currently generating a profile picture; it's basically going to be able to do that using DALL-E, which is amazing. Now, if you want a more traditional experience, we can also use the Configure option. The Configure option is going to help us to just interact with this like we normally would. But you'll see here that we've generated this sweet picture, which looks like a temple of coding, and we have this Coding Temple GPT rename, and it's changed everything we need. We can lock this in and say, yes, that looks great. Now let's head over to our Configure option. So you can see that it's updated the name. We could add ourselves a description. And we can also go forward and give it some instructions, right? So maybe we can say something like: you are a professional; you always answer politely and courteously. If only I could spell, though, right? Oh man, I really botched that one. But this is the idea: we can give it instructions. Let's give it something silly: and you speak like a pirate. We can add conversation starters, so things that our users can click on to begin their conversation. We can say things like: how would I combat imposter syndrome in a programming role? So now this is a conversation starter. We can upload files. So just like traditional RAG, we can upload files. I'm going to go ahead and upload a document that I got from the actual Coding Temple blog; we can upload it as a PDF, and then we can have it parse the PDF while we're interacting with the system and give us answers relevant to that PDF.
We can give it the ability to reach out to the web, or if we prefer not to, we can shut that off. We can let it generate images, and we can let it create and run code if we need to. You can see we can upload a bunch of different files if we wanted to. But the idea is that we have the ability to build our own retrieval augmented generation system without ever touching a line of code. You know, if we go to our additional settings as well, we can opt out of sharing our data so that they don't use it for training, which is very nice. We can even add custom APIs here that we've created, or that others have created, to really elevate our system to the next level. And then we can look at a preview, right? So we have this Coding Temple GPT. We can click on this preview, and it's going to reach into those source documents for us. It's going to decide which context it's going to use, and then it's going to give us an answer that's relevant. And as you can see, because of our instruction, we have the classic example where it's going to speak like a pirate. Obviously, this is just a toy example. But the idea is that we can shape how our system interacts with people, we can shape what resources it's going to leverage, and we can do all this without ever writing a single line of code. Once we have this done, we can go ahead and save it, and we can share it with others so they can use our custom GPT. It's just a really powerful system that's really good at letting you prototype extremely quickly, regardless of your programming or coding experience. So, a very powerful tool. And that is your custom GPT feature from OpenAI. Super powerful, super awesome. And with that, we'll head it back to Greg. All right. Yeah, Chris. Looks like it does what it says on the tin. Very cool. I guess that's a wrap, right? Well, actually, what we just saw was that tool, that no-code tool, is just a wrapper for the OpenAI model. It's a no-code solution to create a simple RAG, or retrieval augmented generation, system with just one document there. We just used one blog. And it's great to get something up and going, to maybe show the idea, to get users or stakeholders engaging with it. But where do you go from here? Where do you go if you say, I want to take this and I want to use it for my business? I want to use it for the enterprise. Well, let's think about why we might not want to use OpenAI's GPT tooling. One fun example from this past week that has been getting some traction online is, well, maybe the data is not as secure as people thought it might be. Here's an example from the co-founder of levels.fyi, which is all about tracking salaries, stock vesting schedules, and other information about folks getting jobs. And the conclusion here is that maybe it's best to treat any data you upload to OpenAI as good as public. I think that's a conservative view of the world, especially because your actual competitive advantage, remember, is not in the models anymore; it's in the data. And levels.fyi has ostensibly the best data on levels. If you lose the data, you might lose your advantage. And shout out to Antony here, who dug in and found he was able to access the data from levels.fyi's GPT, which included those salaries, those vesting schedules, and even gender information. So this definitely highlights one thing you might want to avoid. And one of the things on every business leader's list of potential trouble they don't want to get into is data leaks. But what about other stuff?
Well, you know, think about the fact that there's been no engineering whatsoever done. There's been no integration into existing products. There's been no instrumentation of how to actually collect data on usage or on anything else. And of course, if you send this to somebody, what you're essentially asking them to do today is to pay for an OpenAI Plus account so that they can even engage with your GPT. They're not paying you then; they're paying OpenAI. And you probably saw that Chris did this ahead of time, so we didn't see it in real time. That's because it's not the fastest tool right now for this type of work, and it's hard to understand exactly how you might go about really speeding it up. So when you start to say, okay, now I've got my POC, what do I do from here to iterate on it? Well, one of the things you want to do is actually go build an app, which you can do in a low-code way, but that gives you the extensibility that GPT-level apps do not. There's a big difference between extensible low code and no code in this game today. So let's back up to first principles. If we're going to build LLM applications, generally we want to start off by prompting as much as we can. How far can we get with prompt engineering? From there, if we need the model to understand additional data it wasn't trained on, one way to take a cut at this is to build a question answering system, aka a retrieval augmented generation system, one that essentially leverages your data. Whether we're talking about question answering, augmenting generation, or ChatGPT for your data, we're talking about the same thing: a chat interface that takes into account stuff that is important to your user and your context. When it comes to prompting, the top, top rule is: write super clear and specific instructions. You probably saw in Chris's demo that he was giving instructions to the model so it knew how to engage with the user. That context is oh so important. This is the "you are an expert in coaching people who want to grow their online presence" bit, let's say, in the imposter syndrome example. Or even if you give an LLM something like "you are very hungry or tired or angry today," then it's going to have a very different output and result. This context could also include that data of yours. Of course, you can provide examples: one-shot, two-shot, few-shot learning. You can provide thought-through and reasoned examples through chain-of-thought reasoning approaches. And you can ask for outputs in specific formats to really get your prompts dialed in. You can go very far with simply prompt engineering. To take it to the next level, to do question answering over your own personal documents, to do RAG, we want to understand why we would want to do this in the first place. Well, LLMs, when you just prompt engineer them, will oftentimes lie to you. They'll say things that aren't true. This is a problem I noticed when I was playing with the GPT model we put together as well. It said, well, I actually don't have any information on that in my source documents, but here's an answer anyways. Okay. All right. This confidence in false answers, this is the false positive that AI practitioners are so familiar with. How do we get around this?
Well, in business as in life, as in AI, we need to make this fact-checkable. We need to make sure that we have references to the things that we're saying are the answer in our applications. When the LLM is able to retrieve references, find those references in the source documents, and augment the prompt with those references, adding those bad boys directly into the input prompt, then we get improved generations, improved answers, a better experience for our customers, and places where they can go to find even more information. This is called retrieval augmented generation: retrieval, finding the references; augmenting the prompt with those references; and improving the answer as a result. When you have very specialized domains with really specialized words and jargon, there's even more that you can do when building these RAG systems. But it's really obvious that if there are really specialized words you use, you might want to start by putting those into your application a little bit, and you can start directly by putting them in with your PDF-level data using a RAG setup. So just to give you a quick overview of this, because we're going to go from no code to low code here, there are three easy pieces to understand. For retrieval, we ask a question, we have an input, we search our database for stuff that's similar to the question, and we return stuff that's similar. The way this looks is: we take our documents, like our little blogs, we chunk them up into little pieces, and we pass them through what's called an embedding model that takes the tokenized text and actually creates embeddings, or vectors, from those chunks. And then we store those in a vector database, or a vector store, or an index; these are all synonyms. Once we have all of that set up, we can ask a question, what's the deal with imposter syndrome, and we can get some similar reference material out of our data. For coding imposter syndrome, we've got a blog for that. So the way this looks is, you might ask: what should AI business leaders know to succeed these days? We're going to pass that through an embedding model, which turns it into a vector. We're going to go search our vector database. We're going to set up a prompt template. We're going to do a little bit of coding here, but it's in natural language, so it should be okay, even for business leaders, to understand. We say: use the provided context to answer the user's query. This is our instruction. You may not answer the user's query unless there is specific context in the following text. If you do not know the answer or cannot answer, please respond with "I don't know." Very important. So we take our similar source material, our references, and we pass those bad boys directly into the prompt. We stuff them into the prompt window, the context window, we shove all of that into our LLM, or GPT if you will, and we get our answer with source material. Now, this is a RAG system in general. But for boots on the ground, if you're looking to go start building, yeah, see how far you can get with prompt engineering. Yes, see what you can do with the OpenAI GPT store. But always start with synthetic data and closed source models as you're getting going, from your POCs all the way up to your MVPs. If you can get away with synthetic data and closed source models, you can build faster. Okay? Speed is paramount. Once you have it set up with synthetic data and closed source models, move to open source models.
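Here is a minimal sketch of that retrieve-augment-generate loop in code, assuming the OpenAI Python SDK (v1+), numpy for cosine similarity, and a toy in-memory list standing in for a real vector database; the chunk texts and model names are illustrative assumptions, not the demo's actual setup.

```python
# Minimal RAG sketch: embed chunks, retrieve the most similar ones, stuff them
# into the prompt template described above, and generate a grounded answer.
import numpy as np
from openai import OpenAI

client = OpenAI()

chunks = [
    "Imposter syndrome is common among new developers...",
    "Coding in public and seeking feedback builds confidence...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

index = embed(chunks)  # our toy "vector store"

def answer(query: str, k: int = 2) -> str:
    q = embed([query])[0]
    # cosine similarity between the query vector and every chunk vector
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n".join(chunks[i] for i in np.argsort(-sims)[:k])
    prompt = (
        "Use the provided context to answer the user's query. "
        "If you do not know the answer, respond with 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuery: {query}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

print(answer("What's the deal with imposter syndrome?"))
```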
Once you're able to do that additional lift of open source models, now go ahead and add your private data, because now you're not concerned about anybody seeing it. Baseline your performance there. Is it solving the customer problem? Is it actually adding value? Then from there, you can reduce the model sizes. You can do fine-tuning on embedding models and large language models. You can keep iterating, collecting user data, improving, doing the LLM operations that are going to be so important for your long-term success. Some companies have even more to keep in mind. For instance, are you in the space industry, so you need on-prem hardware and can't even use the cloud? What scale and speed do you really want to be rolling this out at to start with? Is it 10 people? Is it 100 people? Is it 1,000 people? What kind of performance will you need? How quickly will users require that they can interact with it? What will that look like at scale, truly at scale? Does that mean you're going to need to train smaller models so that you're not hosting these big mega models to solve a simple downstream task? What's the process for fine-tuning your way down to that sort of solution, the one that can be deployed at scale? What does that actually look like? Well, that's a topic for another day. For today, we're going to take it from no code to low code, and we're going to show you that if we add a little bit more code, we can add a little bit more data, making it a little bit more performant and a little bit easier. So we're going to welcome Chris back up on the stage now to show you how to think about this from the AI business leader level, and also to give you a little bit of material that you can get your budding, aspiring AI engineers kicked off with today. Chris, back to you, man. Hey, yeah, thanks, Greg. So basically, the idea is we're going to use this Chainlit application here to chat with our PDF. We're going to pass in that same PDF that we uploaded to our custom RAG that we built with GPT, so our custom GPT if you want to call it that, and we can ask the same kinds of questions. We can ask things about business use cases, right? So we have a question queued up here, which is about, you know, AI business leaders: what does an AI business leader need to know about building internal training programs? You can see that we're using this chain right now. We have this question and this chat history; using the stuff documents chain, we're going to provide a bunch of different sources, and then we get this response, which says AI business leaders need to know several things. And this is all information that's derived from the blog written by the people at Coding Temple. So it's a really powerful tool to, you know, kind of move us into the next phase. Now, the idea here is that we have the ability to create this tool in a much lower-code fashion. We can see here that we get our sources back, which is great. And if we look at the actual code, you'll see that there are only about 150 lines of code. Most of this is just boilerplate and prompts, right? And the idea is that's all we need to create this system.
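For reference, a bare-bones Chainlit app in this chat-with-your-documents style might look like the sketch below; this is an assumption-based skeleton, not the roughly 150-line application shown in the demo, and the answer() helper is a stand-in you would replace with your own retrieval chain.

```python
# A bare-bones Chainlit chat app (a sketch, not the demo's actual code).
# Run with: chainlit run app.py
import chainlit as cl

def answer(query: str) -> str:
    # Placeholder: swap in your retrieval-augmented chain here.
    return f"You asked: {query}"

@cl.on_chat_start
async def start():
    # Greet the user when a new chat session opens.
    await cl.Message(content="Ask me anything about the uploaded document.").send()

@cl.on_message
async def main(message: cl.Message):
    # Route each user message through the (placeholder) answering function.
    reply = answer(message.content)
    await cl.Message(content=reply).send()
```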
And the really key thing about this is that we have full control over that same system template, but also over how we're splitting our documents up and what kinds of documents we're accepting, right? We have things like what kinds of embeddings we want to use and how we handle memory. All of these things are now at our fingertips, whereas before we would have had to make a lot of different changes in prompting, or we'd have to just abandon the GPT tool since it doesn't handle the specific use case. So the idea here is that this is a simple thing that we can build on top of. And one of the first things you might think is, well, okay, it's great that we can do this one-PDF-at-a-time thing, but what if we wanted to use the whole blog, right? Well, this is where we can extend our application. So we're going to drop this Colab link into the chat here. And the idea is that this is going to help you use a combination of both OpenAI and Cohere to let you scrape all of the blogs from Coding Temple, chunk all of them, combine them into documents, and then do some RAG on top of those documents. You'll notice that we're going to use a more powerful embeddings model. This is the Cohere Embed v3 model, which is currently at the top of the embeddings benchmarks. It's even beating out things like Ada, so OpenAI's models; it's beating out a ton of different embeddings models. And we're also using this re-ranking pipeline, which is basically going to cast a very wide net of contexts and then use a special model to re-rank those contexts into the most relevant pieces of context. So it allows us to do this kind of inefficient search on top of an extremely efficient search, but only on a small subset of the results. A very powerful tool. And with this tool, you'll see that, after some setup, we can ask questions like: what are some of the ways to avoid imposter syndrome? And to avoid it, we can engage with supportive online communities, seek feedback; we get great advice, basically. We can see the sources that we used to get this advice, including some blogs that are comprehensive guides, you know, discover how coding in public helps battle imposter syndrome. We have a bunch of different sources that we're pulling from here, all from the Coding Temple blogs, to get this response. And because we have all of those blogs, instead of just one or two blogs, we can also ask the same question that we asked our single-PDF application and get an answer that relies on that information. So the idea here is that we can extend these systems beautifully once we have them in a code-focused format. And this still isn't a ton of lines of code, right? So we're really able to take these systems from that v0, the custom GPT, and extend them all the way up to much more performant systems very straightforwardly, using tools like LangChain, Cohere, and other APIs to help us get the job done. So that's the idea of what we're talking about today, right?
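Here is a sketch of that wide-net-then-re-rank idea using the Cohere SDK; the model name, API key placeholder, and candidate documents are assumptions, and the actual notebook wires this into a LangChain pipeline over the scraped blog posts rather than the bare calls shown here.

```python
# Sketch of "cast a wide net, then re-rank" with Cohere's rerank endpoint.
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder key

docs = [
    "Discover how coding in public helps battle imposter syndrome...",
    "A comprehensive guide to starting a career in data analytics...",
    "Why cybersecurity skills matter for every engineer...",
]

query = "What are some ways to avoid imposter syndrome?"

# Step 1: in practice, a fast vector search would narrow a large corpus down to
# a wide set of candidates; here we simply treat every document as a candidate.
candidates = docs

# Step 2: re-rank the candidates with the more expensive re-ranking model and
# keep only the most relevant ones to stuff into the prompt.
reranked = co.rerank(
    model="rerank-english-v3.0",  # assumption: any available rerank model works
    query=query,
    documents=candidates,
    top_n=2,
)
top_docs = [candidates[r.index] for r in reranked.results]
print(top_docs)
```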
You can prototype super quickly with those GPT-style models, then you can really hit the paint hard and get your lower-code, simple solution, and then extend that into this more performant system as you evolve and your team takes on new challenges. So we'll head back to Greg. But that's how we do it in a more code-focused solution. And you can take this code and send it to your engineering team, if you want, and have them figure out what they'd like to do with it. And there you go. Yeah. Awesome, Chris. The extensibility of the Lego blocks is so important as we start building. I love it. And, you know, the takeaway from today is really that there is low-hanging fruit out there, but you've got to start with your products. You've got to start with your customers. You've got to get that flywheel going. It's easier than ever to get it going once you know what you're trying to build and why. That's where prompt engineering comes in. You pick up a quick GPT. Build it. Fake data, remember: synthetic data first, closed source models first. Start understanding how to go to a low-code solution. You can take our chat-with-your-PDF app and duplicate it directly on Hugging Face; there are many other similar applications like that on the platform. You can build with OpenAI and then an open source embeddings model, then you can build with an open source model, and then you're completely open source, and you don't need to worry about anybody messing with your data. As you get into RAG++ or ChatGPT++, you're no longer concerned that your data is being used to develop this thing, which is ultimately where you need to be. So with that, I'd love to welcome Evan Shy and Chris back up on the stage for the Q&A portion of today's event. If you're sticking around and you still have questions, definitely go ahead and copy this Slido link, hit the URL, and upload your questions. We'll be sure to get to as many as we can. Guys, thanks for joining me. Thank you, gentlemen. Thanks, Greg. Thanks, Chris, for both the intro and partnership and a great presentation. I'm certainly thrilled to see so many people joining us here today to explore the frontiers of AI and business, if you will. I know we share the belief that developing AI capabilities internally is no longer optional, but imperative to remain competitive. That's what we focus on at Coding Temple: technical skills broadly. We help individuals and organizations develop these critical technical skills, and we end up training a little over a thousand professionals every year in data analytics, cybersecurity, software engineering, and, increasingly through partnerships like this, AI. And what I have noticed is that there's just a lot of uncertainty in terms of navigating this complexity and knowing where exactly to start next. So partnering with AI experts like yourselves to help business leaders navigate that complexity is, I think, critically important. And thanks for shepherding us along that journey today. I'm excited to dive into some questions. Yeah, awesome, Evan. Well, I think we've got a good one at the top here. Let me push it right back to you, man.
With the rise of AI, what skills should our workforce be developing to try to stay ahead? If I'm a business leader, how should I be thinking about this problem? As you can imagine, this is something we think about often, obviously, and I think both upskilling and reskilling talent is critical. I think professionals, generally speaking, should be focusing on developing AI literacy, then maybe understanding data ecosystems and how data plays a role in that process, and then just really the ability to work alongside AI, including the ability to question and refine AI outputs. At the individual level, and you touched on a lot of it through the presentation, start playing around with these tools immediately. I mean, they're pretty accessible. They serve as kind of a nice on-ramp, if you will, to start to intuitively think about how you can make yourself more and more efficient at work. I think when you start talking about organization-wide leadership, it can be a bit more complex, actually, because now you're talking about a set of skills and capabilities, not just data and AI, but also organizational strategy, talent, and culture. I think it's critical for organizations to bring help in from the outside to help navigate this constantly evolving space, experience workshops like this, and conduct deeper audits and analyses in order to see where the best opportunities are and where we can create a steady stream of training as well as talent to scale those AI capabilities. Yeah. You know, I just feel like there's a maturing idea of AI engineering that's coming into focus, which we see from AI Makerspace. If you had to articulate that, Chris, what does that look like in terms of the folks that people have working for them today? Who should they be looking to to become the AI engineer of the future in their organization? They're looking at software engineers, data scientists, security engineers, prompt engineers. What's the deal with all of these titles? Yeah, I mean, there are a lot of titles for sure. The ones that I think AI is most critically in need of, if we want to talk about what roles are missing: traditional software engineering backgrounds are going to do very well adapting to this AI workflow. It's a lot of that traditional software experience that really helps you exploit, or really squeeze the most juice out of, these AI systems. But there's a place for all of the roles you mentioned. Security is a very important task with LLM applications. Data science exists and is even more necessary than it used to be in terms of guiding our direction: where should we take these systems, how do we improve them, how do our users feel about or experience our tools? So, I mean, everyone has a home, but I think the most critical role to upskill right now is those classic SWEs, to really get them leveraging these tools and kind of launch them into the stratosphere. Classic S-dubs. I love it. Okay. So, next question, from Islam: do you think this GPT builder will possibly negatively affect people who are trying to pursue careers in ML, like building RAG systems or computer vision? Anybody have a hot take on this? Chris, what do you think? Definitely not. Zero percent shot.
It is a really good tool, but we have had really good tools our whole lives. And the thing that we have done every time our field, whatever version of software you're in, gets a new tool: you know, when JavaScript and all these web tools came out, they didn't kill any jobs. They just moved where the talent spent their focus. And I think getting up to that higher level of abstraction and getting into a flow where you can use these tools, leverage these tools, to accelerate your productivity and your business's productivity; I can't imagine we're going to run out of use cases for these kinds of tools. And because of that, I think if you're looking to get into this field, it's still the best time ever to get into it. It probably will continue being the best time ever until we really design something that just takes us over. So I feel pretty strongly that this is not going to kill anybody who's pursuing a career. Yeah. So, Evan, when somebody new comes to Coding Temple and they're sort of worried about the latest and greatest, and they're looking at this basic, foundational stuff they need to learn, but they're thinking, shouldn't I be focusing all my time on the shiny object that just came out, what are you telling them? Yeah, I think I would echo some of what Chris just said. These tools are going to continue to evolve, and they serve as really useful on-ramps to expose you to the capabilities of what these technologies are able to do. I think, especially as you want to press down the capability spectrum, if you will, where real innovation and competitive advantage can happen, you still require some of these more durable, foundational skill sets and understanding in order to really lean in and create some advantages there. So we, practically speaking, are constantly updating and evolving how we teach and train on these tools. But the first principles, the foundational elements, I think are as critical as anything and will remain durable and provide utility to professionals trying to get into the space. Yeah, yeah. And it's so interesting, because even as you go up the chain within a company, it's important to know about that new new. The board wants to hear about it. They want us to be on top of it. They want to be able to tell their stakeholders that, yep, yep, we've got it, we know what we're doing. And so if you can be somebody in that company who has built a foundation but is also reaching out and understanding those on-ramps at the edge, and you're communicating that internally, you're going to be a force to contend with within a company going through a transformation. So yeah, I think it just adds a little something, you know, gets you quicker on the low-code app game, and keeps you going there. So, the next question: we've got an ROI question here. Evan, I'm going to kick this to you, so you can take it however you'd like. What should businesses consider when investing in AI if they're trying to maximize their return on investment today? What do you think? Yeah, it probably depends on where you are in the product life cycle to some degree. But I think when you do think about the ROI of AI investments, you can push it beyond just cost savings into value creation.
I think that's where some interesting and exciting things happen, whether it's enhanced customer experiences or new product capabilities, so long as those capabilities are delivering better customer experiences and customer value. That is where I would increasingly focus. In the short term, there are some natural, low-hanging-fruit efficiencies that can be captured, measured, and articulated sufficiently to drive buy-in and understanding within the organization, which might help you get the resources you require to go after some of those longer-tail value opportunities. But if you think about the long-term growth metrics and strategic objectives, you know, expanded LTV, brand equity, customer loyalty and retention, all of those types of things I think become interesting. Yeah. It's such a bottomless question. It's almost like the answer is everything; maybe consider everything a little bit. Okay, in the interest of time here, I want to see if we can get to a couple more. This one's for you, Chris. What do you think? Islam asks: will you be creating any videos for the RA-DIT paper that was released recently? Maybe you can tell us a little bit about it. Should we? Sure, yes. I mean, RA-DIT is pretty cool. It's similar to the work that was done with the original RAG pipeline, which involved that joint training. It's not nearly the same, but it's in a similar vein, where we're thinking about these systems and how we fine-tune them together. So it's definitely a cool and interesting thing. It will probably gain more adoption as people continue to ask the question, should I fine-tune RAG? And the answer is, well, yes, there are situations where you'd love to do that. Okay, all right, got it. So maybe we'll name-drop it, Islam. I don't know if it deserves its own event or not. I'm not convinced yet, but all right, very cool. Last, we're going to close it up with: what are the most exciting developments in AI that you believe businesses should be aware of? Of course, we covered the GPT store today; obviously we're trying to get you guys up to speed. Evan, what do you think? What are you paying attention to today that you'd encourage all the business leaders out there to keep their eye on? Yeah, again, I kind of just alluded to it. I think oftentimes we talk about the efficiencies that can be captured creating content, whether it's a legal document or marketing content. I would look beyond that into what kind of new insights this technology can unlock for myself and our organization. So the ability to take in our own data safely and discover new customer value opportunities is certainly something I get excited about. It's something that used to require not only specialized tasks, but a series of laborious, specialized tasks to analyze and synthesize, right? So that is something that we spend a lot of time thinking about, and how we're going to ultimately leverage it. I would encourage other business owners and leaders to be thinking about it similarly. Yeah, yeah. And I guess my two cents is: start collecting data today that you think might be useful for tomorrow, and start collecting it in a way that you'll be able to use it later. You know, if you don't buy that, some would say the
most important data is the data you collect tomorrow. That's another way of viewing the world, and even if you view it that way, set yourself up for it. Chris, any thoughts on exciting AI developments that you're paying attention to? There are just so many, Greg. Some of the coolest stuff is these optimization, or inference optimization, kinds of tools that we're gaining access to as we move through the space, which let you run these bigger models on smaller hardware, or faster, or better. I mean, there is so much in that ecosystem right now, from a serving and inference side, that I think is worth paying attention to, especially for businesses that are kind of locked out of these closed source or API-based solutions, right? Every FLOP that you save, or every GPU minute that you save, is a dollar back in your pocket. So I think it's worth paying attention to those services. Love it. Paying attention to scale before you need it. With that, we're going to go ahead and wrap up. Thank you so much, Evan. Thank you, Chris, for the Q&A. Thank you, everybody, for your participation today. This brings us to the end of today's event, brought to you by AI Makerspace and Coding Temple. If this session resonated with you as a business leader, you might consider one of our tailored corporate training packages built on top of our upcoming LLM Ops: LLMs in Production cohort, where aspiring AI engineers learn everything they need to know to build, deploy, and scale production RAG applications. Tailored training includes pilot project ideation, one-on-one coaching and mentoring throughout the cohort, post-cohort individual project consulting, and an internal demo day hosted directly by yours truly and you, business leader, for your executive leadership team. So if this is interesting, please go ahead and book with our team, and we'll be happy to have a chat with you and see if it's a fit. Otherwise, we'd love to hear your feedback on today's event and how we might bring even more value to you next time we get together and do some teaching and learning at the edge. Feedback surveys are in the chat, and until next time, keep building, shipping, and sharing, and we'll do the same. Thanks, everybody. See you soon.
OpenAI GPTs: Building Your First No-Code GPT
3680
AI Makerspace
20231114
GPT-4 Summary: "Unlock AI Leadership and Innovation: Join Our Groundbreaking Event! As we approach 2024, the era of Generative AI thought leaders and business operators is here. This event is a game-changer for those ready to lead AI transformations in their organizations. Learn practical frameworks to identify high-impact AI use cases, develop rapid MVPs, and effectively guide your teams in building these MVPs with a focus on speed and iteration. We'll move beyond the hype, offering real-world examples, success stories, and access to open-source low-code and no-code tools. Perfect for aspiring AI thought leaders aiming to drive technical and financial success in their businesses. Be part of this transformative event and start reshaping your company and teams with the power of AI today!" Event page: https://lu.ma/AIadvantage Have a question for a speaker? Drop them here: https://app.sli.do/event/21uct7gUyYqQUiShuz3Xi4 Speakers: Dr. Greg Loughnane, Founder & CEO AI Makerspace. https://www.linkedin.com/in/greglough... Chris Alexiuk, CTO AI Makerspace. https://www.linkedin.com/in/csalexiuk/ Special Guest Evan Shy, Chief Executive Officer, Coding Temple https://www.linkedin.com/in/evanshy25/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply to our NEW LLME course on Maven today! https://maven.com/aimakerspace/llm-en... How'd we do? Share your feedback and suggestions for future events. https://forms.gle/NEdP86DUjH5fPGWp6
2024-06-13T22:46:47.217361
https://www.youtube.com/watch?v=q1XFm21I-VQ
Hello. Is everyone excited to be here? Yes. Woo. Welcome to our first ever Dev Day. I'm really thrilled to have you folks join us. We are humbled by the community response and are super excited about this amazing turnout. This week at Snowflake Summit, we talked a lot about our new products and our vision for our future with customers, prospects, and partners. But today is for the builder community. It's for all of you, Snowflake customers or not. We want you to connect, share ideas, and skill up in data and AI. We want you to get inspiration from each other and from the industry luminaries we have lined up this afternoon. I took a picture of myself with Andrew to the side; I was like, oh my God, I'm with the demigod. I'm a developer. I'm a software engineer at heart. And I love the little things that technology can do. I'm genuinely super excited when I check out new things. I wrote my first Streamlit app. It was all of 10 lines long. And I was like, holy cow, this thing runs inside, you know, Snowflake. I don't have to deploy a server. It just works out of the box. And of course, I shared that with Adrian. I was like, oh my God, you're writing a Streamlit app. And I get super inspired when folks like our PM director, Jeff Holland, make things like this video about a weird idea he had: hey, let's use container services to do some video transcriptions, then get structured attributes from those transcriptions, put them into Cortex Search, and have a chat bot. He made that happen in a couple of hours. And I could then install an app to do the very same thing, also in like 10 minutes, and I could tinker with it. And these are all great things, because we are able to grow the community of developers that build on Snowflake. It's a strategic priority for us. So we're evolving and investing to better meet the needs of builders like you. And although we started as a closed source product with an enterprise focus, we are opening up. We are becoming an application platform with a healthy dose of open source and community-led development. And you heard it before: we just concluded our first international AI hackathon featuring Arctic, our own truly open LLM. Congrats to the winners. We began investing in our developer program five years ago to support developers building data-intensive applications. It's our sweet spot. And the growth has been amazing. Thousands of people are powered by Snowflake already. And we partner closely with these companies at every stage to help them build, but also scale, their applications with customers, like helping them generate revenue. And whether it's providing build and design resources, specialized support, or go-to-market, we are the partner program. We are aligned with the growth of these partners. On Snowflake, you can have fun building and creating amazing startups that can change the world with our support. And hundreds of startups are building their entire businesses on top of Snowflake, with a handful of them, including folks like Maxa, My Data Outlet, and Relational AI, earning millions from distributing their apps on the Snowflake Marketplace. I met Moham earlier yesterday. I was like, dude, Snowflake ran an unpaid commercial for you for 25 minutes. That's what the keynote yesterday was. And we also make equity investments in these startups because we want to align long-term incentives.
Earlier today, on this very stage, BigGeo, Scientific Financial Systems, and SignalFlare.ai were the finalists of our fourth annual Startup Challenge, and they competed for up to a million dollars in investment from Snowflake. And the big winner is... Big congrats to SignalFlare.ai for winning the Startup Challenge. Please give them a big round of applause. Under the Snowflake Native App Accelerator Funding Program, we have partnered with 10 leading VC firms to invest up to $100 million in early-stage startups that are building native apps. We are also investing in training for our builders to help them skill up and grow their careers. Just this week, we launched the North Star education program, with self-paced online courses and in-person workshops in all regions of the world, all of this for free. And check out the courses we just dropped on Coursera to start building on Snowflake. I feel very fortunate that we are all at the center stage where data, AI, and technology are still transforming the world. It's a thrill. It's a privilege. It's also a responsibility. And we are very grateful to the many luminaries, there's no other word for them, who are driving the transformation and are joining us here today as we kick off our luminary talk series. And I am delighted to welcome our first luminary speaker on stage: founder and CEO of Landing AI, co-founder and chairman of Coursera, and a former Google colleague. Please welcome Dr. Andrew Ng. Hey, thanks for having me. Welcome, welcome. Andrew, it's a privilege, it's an honor, it's a thrill to be on the same stage as you. You've been around AI for way longer than most people. What was your AI aha moment? By the way, I went to grad school at Brown, and everybody then told me, this was like 20 or 25 years ago, everybody was like, don't touch AI, nothing will come out of it. Wow. They were wildly wrong. But what was your big aha moment for AI? I remember when I was a teenager, my first job was as an office admin, and I just remember doing so much photocopying, photocopying, photocopying. And even then as a teenager, I thought, boy, if only we could do something to automate all the photocopying I had to do, maybe I could spend my time on something else. That's why I wound up pursuing a career in computer science and AI. And in fact, with your remarks just now, I had almost forgotten: I saw you operate the Google Ads business, and now you're the CEO of a huge company, so when you mentioned that you were writing Streamlit code, I got a thrill out of all of that. You know, tech can actually be fun. That Streamlit one was fun. I was so excited to watch the video of Landing AI and Snowflake working together, LandingLens, that we posted together on LinkedIn. That, to me, is pure joy. As we're talking about AI, I have to ask: is there a billion-dollar model coming, you think, where people need, I don't know, 50,000 H100s to get started, step one? Yeah, definitely some people are thinking that way. It'll be interesting to see if we get there. Part of me feels like there could be cheaper, less capital-intensive, less energy-intensive paths as well to build highly intelligent systems. But on the other hand, I don't think we've squeezed all the juice we can out of sheer scaling laws. So that's also worth pursuing. And I'll just say I really appreciate the work that Snowflake has been doing as well on open sourcing Arctic. I think we need more contributors doing that kind of thing.
To me, good things happen when technology spreads broadly, when lots of people can do the same thing. Otherwise it naturally falls into the hands of a few, meaning we don't get broad-based benefits. So for me, that's the reason why I hope models stay somewhat less expensive, so that more people can develop, more people can tinker, and push all of us forward. A couple more questions. You were at the US Capitol recently, where there was this debate over open source models and AI regulation. Where do you land in this debate? Yeah, you know, at this moment, I'm actually very worried about California's proposed SB 1047, which I think would be very stifling for innovation in open source. I feel like there's a technology layer, and technologies are useful for many applications, and then there's an application layer, which tends to be specific instantiations of technology to meet a customer's needs. And for a general-purpose technology like AI, it's impossible to stop AI from being adapted to potentially harmful use cases. So California's SB 1047 poses a specter of liability if, say, someone open sources a model and someone finds some way to adapt it to nefarious means. And I wish we could guarantee AI will never be used for bad things. I wish we could guarantee computers will never be used for bad things. But if you say that any computer manufacturer is liable if anyone uses your computer for something bad, then the only rational move is that no one should make any more computers, and that would be awful. So I think Washington, D.C., fortunately, has gotten smarter. I feel that over the last year, you know, the White House executive order I had some concerns with, but I think the House and Senate have gotten decently smart. The Schumer gang actually figured out AI and is more pro-investing than pro-shutting-it-down. But I'm actually really worried that here in California, which is home to so much AI innovation, there's this truly awful proposal on the table, which just passed the Senate vote and is going to the Assembly next, that I think would be awful if it passes. So we'll see. All of you, go fight the fight. SB 1047 is an awful idea. People forget. I think it is really important to reiterate what Andrew just said, which is that all of us need to understand that AI is a technology. And yes, there'll be good things that come from technology, but there'll also be bad people that use technology. We need to make sure that laws cover those things, but not make either a hero or a villain out of technology. There are going to be all kinds of different use cases that we as a society need to be ready for. Okay. One other thing, please. And to be clear, I'm pro thoughtful regulation. That's right. Take out the harms, regulate against harmful applications. I'm pro thoughtful guardrails. But when regulation puts in place impossible requirements, then I think the only thing that will do is stifle technology and stifle innovation. And that's the thing to remember: premature regulation can be super stifling, because it introduces so much risk. Okay, topic du jour. We know that language models, whether it's GPT-3 or GPT-4 or the Llama models or Arctic, were big steps forward. But the buzz these days, which you have written about and thought a lot about, is agentic AI. Can you tell us what it's all about?
Yeah, so I think AI agents, which I'll chat about later in the presentation as well, are significantly expanding the set of what can be done with AI. I feel like we now have a set of AI tools and large language models that are working, and the work on Cortex is brilliant, frankly. And I find that when you build on top of these tools, you can even further expand what is possible with a large language model. And in terms of AI technology trends, I think for any builder, anyone building AI, if I had to pick one thing to keep an eye on, I would say it's AI agents. There's more than one thing we should keep an eye on, but if I had to pick my top one, this might be it. Well, we should all be saying agents, agents, agents, but we won't. With that, I will leave the floor to you. Andrew's going to share a few remarks. You'll all love hearing from him. As I said, this is an incredible privilege for me to have Andrew and the other amazing guests that are going to be here. I hope all of you have a lot of fun listening to him, learning from him, asking questions, and of course, doing cool things yourself. Thank you. And I'll just say, I really want to thank Sridhar and the whole Snowflake team. My team at Landing AI is building LandingLens as a native app on Snowflake and really thinking about how to hopefully do more things with Cortex. It's been such a brilliant platform. We are super excited to be working with you and your team. Thank you. Congratulations. Thank you. Good luck. So, because this is a developer conference, I want to take this opportunity to share with you some things about AI agents I'm excited about. And I want to share some things I've never presented before, so there will be new stuff here. So, AI agents, right? What are they? Many of us are used to using large language models with what's called zero-shot prompting, and that means asking the model to write an essay or write a response to a prompt. And that's a bit like going to a person and saying, could you please write an essay on topic X by typing from start to finish all in one go, without ever using backspace? And despite the difficulty of writing this way, I can't write that way, LLMs do pretty well. In contrast, an agentic workflow is much more iterative. You may ask an LLM to first write an essay outline, and then ask, do you need to do any web research? If so, go search the web, fetch some info, then write the first draft, then read your draft to see if you can improve it, and then revise the draft. So, with an agentic workflow, it looks more like this, where the algorithm may do some thinking, do some research, then revise it, and do some more thinking, and this iterative loop actually results in a much better work product. And if you think of using agents to write code as well, today we tend to prompt an LLM to just write the code, and that's like asking a developer, could you type out the program and have it just run, typing it from the first to the last character? And it works surprisingly well. But agentic workflows allow it to work much better. So my team collected some data based on the coding benchmark called HumanEval. HumanEval is a standard benchmark released by OpenAI a few years ago that gives coding puzzles like this: given a non-empty list of integers, return the sum of certain elements, and that turns out to be the solution.
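A toy sketch of the iterative outline-draft-critique-revise loop Andrew is describing, assuming the OpenAI Python SDK (v1+); the prompts, model name, and single revision pass are illustrative choices, not his actual setup.

```python
# Toy agentic workflow: outline, draft, critique, revise, instead of one zero-shot call.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumption: any chat-capable model works here

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

task = "Write a short essay on why AI agents matter for developers."
outline = chat(f"Write an outline for this task:\n{task}")
draft = chat(f"Task: {task}\nOutline:\n{outline}\nWrite a first draft.")
critique = chat(f"Read this draft and list concrete ways to improve it:\n{draft}")
final = chat(
    f"Revise the draft using this feedback.\nDraft:\n{draft}\nFeedback:\n{critique}"
)
print(final)
```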
And it turns out that GPT-3.5, on the evaluation metric pass@k, got it 48% right with zero-shot prompting, prompting it to just write out the code. GPT-4 does way better, 67% accurate. But it turns out that if you take GPT-3.5 and wrap it in an agentic workflow, it does much better, and GPT-4 wrapped the same way also does very well. And so, to me, one thing I hope you take away from this is that while there was a huge improvement from GPT-3.5 to GPT-4, that improvement is actually dwarfed by the improvement from GPT-3.5 to GPT-3.5 with an agentic workflow. And to all of you building applications, I think that maybe suggests how much promise an agentic workflow has. So my team at Landing AI works on visual AI, and I want to share with you some late-breaking things I've never presented before. We just released this as open source a few days ago; it's about building a vision agent. So the lead of this project, Dylan, is an avid surfer, and so he looks a lot at shark videos. That's a shark, and these are, you know, surfers kind of swimming around. And Dylan was actually interested, with videos like these, in how close sharks get to surfers. And this is a generated video, so a video generated like this: the shark is 6.07 meters away, 7.2 meters, 9.4. Now it's gone far enough away, so we switch the color from red to green when the surfer is more than 10 meters away from the shark. So if you were to write code to do this, you know, you run object detection, do some measurements, find the bounding boxes, plot some stuff; you could do it, but it's kind of annoying. It would take several hours to write code to do this. So I want to show you the way we built this video, which was: we wrote a prompt. Can you detect any surfboards or sharks in the video? Draw a green line between a shark and a surfboard. Assume 30 pixels is one meter. Mark the line red, blah, blah, blah. This was the instruction given to the vision agent. Given this, the LLM you prompt writes a set of instructions that breaks the task down into a sequence of steps: extract frames by using the extract frames tool, and so on. This is a sequence of steps to do that task. After that, it retrieves tools. Tools means function calls. So, for example, save video is a utility function that saves a list of frames as a video, and we retrieve a long description of the save video tool, or the save video function. And based on that, we end up generating code, fully automatically generated, that when run results in the video that you just saw. So I'd like to dive a little bit deeper into how this works. We built Vision Agent to work as follows. You input a prompt. This is a slightly simpler prompt than the one I used just now: calculate the distance between the shark and the nearest surfer. And the goal of our Vision Agent is to write code to carry out the task that you prompted it with, so that you can then feed it a single image and have it generate the desired outcome. And similar to agentic workflows for writing non-image code, we find that this works much better than zero-shot prompting for many applications. Moreover, we found that for a lot of image use cases, for example, if in Snowflake you have 100,000 images, then having code that you can very efficiently run on a very large set of images is important too.
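To illustrate the "retrieve tools" step just mentioned, here is a simplified sketch in which each tool has a long description and the agent pulls in only the descriptions relevant to a plan step before generating code; the tool names and the naive keyword-overlap scoring (instead of embeddings) are assumptions for illustration, not Vision Agent's actual implementation.

```python
# Simplified tool retrieval: pick which tool descriptions to paste into the
# code-generation prompt for a given plan step.
TOOL_DOCS = {
    "extract_frames": "extract_frames(video, fps): return a list of frames sampled from a video.",
    "object_detection": "object_detection(image, labels): return bounding boxes for the given labels.",
    "save_video": "save_video(frames, path): save a list of annotated frames as a video file.",
}

def retrieve_tools(plan_step: str, top_n: int = 2) -> list[str]:
    # Score each tool by naive keyword overlap with the plan step.
    words = set(plan_step.lower().split())
    scored = sorted(
        TOOL_DOCS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc for _, doc in scored[:top_n]]

step = "Extract frames from the video and detect sharks and surfboards in each frame"
print(retrieve_tools(step))  # these docs get pasted into the code-generation prompt
```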
Because once you have the code, you can take a large stack of images or video frames or whatever and run it through a relatively efficient piece of code to process them and get the answers. And I want to share with you how our vision agent works. It's open source, so take a look, give us feedback, maybe help us improve it. The vision agent is built with two agents: a coder agent and also a tester agent. With a prompt like this, the coder agent first runs a planner to build a plan that lists out the steps needed to complete the task. So, you know, load the image, use a tool to detect the objects, calculate the distance, and so on. Then it retrieves the detailed description of each of these tools. Tools means functions. And then it finally generates the code. And I don't know if some of this seems a little bit too magical almost, but all the code is on GitHub. Take a look at it. Take a look at the specific prompts we use. You might be surprised when you look at the details. All of this stuff seems magical, maybe the first time, but look at the code and look at the prompts. And it turns out that when you do this, here are a few other demos. This one says: detect every person in this image, figure out if they're wearing a mask, output a Python dictionary. So here's a bunch of code. Here's a Python dictionary: eight people are masked, two people are unmasked. Here's a different prompt to actually generate a visualization, plot the detections, and so on. So this is a new piece of code, all automatically generated. And I actually missed the unmasked people; the object detection thing found the unmasked people. One more example. Oh, this one's kind of fun. Analyze the video every two seconds, classify whether there's a car crash or not, output JSON showing whether there's a car crash or not. So car crash videos are always... well, I don't think anyone was hurt. It's a 16-second video. It's coming. There's a car. Fortunately, no one was hurt, I think. And if you do that, here's the code on the right, and it processes the video and outputs the JSON showing, at this timestamp there was no car crash, at this timestamp there was a car crash. And so the feedback I'm hearing from quite a lot of people, from my internal team and some users, is: yeah, I could have written the code myself, but it would have taken me a few hours, and you can now get this done. I find that in computer vision we use lots of different functions, and honestly, I can never remember what functions to use or what the syntax is. And this really makes the process of building visual AI applications much easier, when it works. And I want to share just one other thing that makes the performance better, which is using a test agent. So I showed you the code agent. It turns out that you can prompt an LLM to say, write some tests for this, or write test code, and based on that, it can execute the test code. Right now, our test code is often type checking, so it's a little bit limited, frankly. But even with that, we can execute the test code, and if the test code fails, feed the output back to the code agent, have it do a reflection and rewrite the code. And this gives it a further performance boost. Oh, and I should say, in terms of academic literature, the two research papers that we counted on the most are the AgentCoder paper by Huang et al. and also the Data Interpreter paper by Hong et al.
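Here is a stripped-down sketch of the coder-agent plus tester-agent loop just described: generate code, generate and run tests, and on failure feed the traceback back for a reflection and rewrite. The prompts, retry count, and use of exec() are assumptions for illustration only (and assume the model returns plain code, not fenced markdown); the real Vision Agent implementation lives in the open source repo.

```python
# Sketch of a coder/tester agent loop with reflection on test failures.
import traceback
from openai import OpenAI

client = OpenAI()

def llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def build(task: str, max_attempts: int = 3) -> str:
    code = llm(f"Write Python code for this task. Return only code.\n{task}")
    tests = llm(f"Write simple assert-based tests for this code. Return only code.\n{code}")
    for _ in range(max_attempts):
        try:
            # Run the generated code plus its tests in a scratch namespace.
            # (exec of model output is unsafe; this is illustration only.)
            exec(code + "\n" + tests, {})
            return code  # tests passed
        except Exception:
            error = traceback.format_exc()
            code = llm(
                "The code below failed its tests. Reflect on the error and rewrite it.\n"
                f"Code:\n{code}\nError:\n{error}\nReturn only the corrected code."
            )
    return code  # best effort after max_attempts
```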
So take a look at those papers if you want to learn more about these techniques. And just to show one last demo, this one is about detecting cars and motion, sampling every two seconds; we wanted it to highlight things. So this is actually for CCTV videos, put together into a single video. A common thing people want is to just highlight the interesting parts to look at. So, long prompt, YouTube link. It creates instructions, you know, like so, retrieves tools, and it turns out the code doesn't work. So the code, maybe I'll show you this one: the code actually fails a few times. Here, when running it, there's an index error, a traceback. So it feeds all these error messages back to the LLM. It fails the second time. It fails the third time. It turns out the third time it fails with no module named PyTube. And so the last thing that fixes it is, it's figured out, you know, to do pip install PyTube. And then this actually fixes it. It runs the code, and then you have this kind of highlighting in the combined CCTV video of which of the four videos has more than 10 vehicles in it that you should look at. So I'm excited about agentic AI as a direction for many applications, including coding and vision and the vision agent, which is what we've been working on. Just to share some limitations: it's very, very far from working all the time. In our experiments, probably one of the most common failures is that the generic object detection system we use, Grounding DINO, sometimes fails to detect objects. Here it missed a bunch of yellow tomatoes. Common failure. One of the things I was excited about in Landing AI's collaboration with Snowflake: we recently built LandingLens, which is a supervised learning computer vision system, as a Snowflake Native App. And I think with supervised learning, we were able to mitigate some of these errors. And then, you know, it's not good at complex reasoning. So here, if you say each bird weighs half a kilogram, how much weight is on the fence? With this example, the system naively detects all the birds, but doesn't realize that one of the birds is flying and won't put weight on the fence. But it turns out if you modify the prompt to say, ignore the flying birds, it actually gets it right. And I feel like today, Vision Agent, we've released it in beta. It sometimes works, sometimes doesn't work. It's a little bit finicky about the wording of the prompt, and sometimes you do need to tune the prompt to be more specific about the step-by-step process. So I wouldn't say this is brilliant, amazing software, but sometimes it works, and when it works, I've been really delighted and amazed by the results. And I just want to mention, oh, hey, guys, you stand up. The team that built the Vision Agent is actually here today: Dillon, the surfer; Sandy, in the middle; and Shankar. So I hope you catch them. You'll learn more about this either here or at the Landing AI booth. And it's also online at va.landing.ai, and we've also released the core engine as open source. And I feel like AI agents are a very important, exciting trend, and we're making this small contribution to open source to hopefully help everyone. And I hope that together we can make agents much better, and this will significantly improve what we can all do as developers. So with that, let me say thank you all very much. Thank you. Let's see, so someone told me that we have a couple of minutes. Oh, I think Lukas from Weights & Biases is coming on.
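The retry behaviour in the demo above (an index error, then a missing PyTube module fixed by pip install) can be sketched as a small self-repair loop like the following; the auto-install rule is an assumption about how such a loop could work, not the exact Vision Agent logic.

import re, subprocess, sys

def run_with_autofix(script_path, max_attempts=3):
    for _ in range(max_attempts):
        proc = subprocess.run([sys.executable, script_path], capture_output=True, text=True)
        if proc.returncode == 0:
            return True
        missing = re.search(r"No module named '([\w.]+)'", proc.stderr)
        if missing:
            # e.g. "No module named 'pytube'" -> pip install pytube, then retry
            subprocess.run([sys.executable, "-m", "pip", "install", missing.group(1)], check=False)
            continue
        print(proc.stderr)  # other errors would be fed back to the coder agent for a rewrite
        return False
    return False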
I think we have a couple of minutes for Q&A. If people have a couple of questions, I'll take them quickly, and then I should get off stage so you can hear from Weights & Biases. Thank you very much for giving us a very concrete example explaining the workflow. Really appreciate it, Andrew. Thank you. And I have a very quick question regarding agentic AI. First, other than the vision agent, do you see agents being used in other concrete applications? That's question number one. Question number two: would you say an agent is just a sort of specialized AI, while the language models or the other models we have are more like a generic AI? Thank you. Thank you. Yeah. So thanks. So let's see. What I'm seeing is that AI agents are being used for many, many different applications. I feel like some of you may have seen the splash that Devin made on social media, although there is some discussion about the nature of that announcement. But OpenDevin is an open-source code agent, and there's a lot of research on code agents. I'm seeing teams doing legal work, for example analyzing complex legal documents, use agents to analyze complex legal documents. I think AI research agents, agents that go onto the internet, do a web search, synthesize lots of information, and write a document with deep research, that's really taken off. I feel like, you know, I actually play around quite a lot with agentic platforms like crewAI, AutoGen, sometimes LangGraph, and lots of people build lots of applications on top of these frameworks. And right now, I find that many agents tend to be built for a specific purpose. But it'd be interesting to see, you know, if there's a single, very general-purpose agent. I think it's exciting. Oh, for a lot of agents, I think that we're just crossing the threshold from toy novelties to being useful. For example, AI research agents, right? They've been around for a long time: go onto the internet, do a web search, write a research paper for you. I think like three months ago, you know, it was great to play with, but just in the last couple of months, my friend Monica Lam from Stanford, her research lab released STORM as open-source software, and I feel like, yep, this is actually going to be useful. So I think just in the last few months, I've seen a lot of these applications cross the threshold from being fun novelties to being actually pretty darn useful. Now, I'll just take one more question, then I think I should get off stage. I probably... no? Okay. All right, they're saying I'm out of time. So thank you all very much, and it's really nice seeing all of you. Thank you.
Andrew Ng On AI Agentic Workflows And Their Potential For Driving AI Progress
1,854
Snowflake Developers
20240611
In this Luminary Talk given during Dev Day at Snowflake Summit 2024, Landing AI Founder and CEO Andrew Ng talks about AI agentic workflows and how they could drive more AI progress than even the next generation of foundation models. He describes major agentic workflow design patterns — such as reflection, tool use, planning, and multi-agent collaboration — and why they're a powerful tool for developing LLM-based applications. Introductory remarks are offered by Snowflake CEO Sridhar Ramaswamy. 00:00 -- Opening remarks by Snowflake CEO Sridhar Ramaswamy 06:09 -- Conversation between Sridhar and Landing AI Founder and CEO Andrew Ng 13:38 -- Presentation by Andrew Ng For more information about Dev Day, click here: https://www.snowflake.com/summit/devday/ To view the Opening Keynote, click here: https://www.youtube.com/watch?v=EF0QCOuVGV4 To view the Platform Keynote presentations, click here: https://www.youtube.com/watch?v=8EQpUv0U4Ac Subscribe for more! http://www.snowflake.com/YTsubscribe/ Explore sample code, download tools, and connect with peers: https://developers.snowflake.com/
2024-06-17T22:37:10.959693
https://www.youtube.com/watch?v=d4qB9xaPU-U
My name is Jacob Barhak. People know me in this meetup because I'm one of the organizers. But what I do outside this meetup is actually, I'm a sole proprietor, and I develop disease modeling software and all sorts of other tools to comprehend medical data. This is a long-term journey. I've been doing it for over 10 years, and I'll be doing it for a long time. And what I'm going to show you today is an offshoot of one of the other projects. My project, presented here and in other places in the past, is a disease modeling foundation. I model COVID, I can model diabetes, but it's basically an infrastructure. The problem is, how do you get data for disease models? Data about diseases is kind of secured and unreachable. So one of the ways to get it is from publications, but then again, publications are all over. But a few years ago, something appeared called clinicaltrials.gov. Actually, it appeared because the government issued some laws that everyone has to report their clinical trials, and they have to go into some database. There are some nuances of exactly what has to go there, but basically the law says that you have to report your clinical trials. And there's a website called clinicaltrials.gov. I'm going to show you for a second how it looks. This is the main page. This is the count, updated like an hour ago, of how many research studies there are; studies include clinical trials and other types of studies. This is how much is there today. This number keeps on growing, and some of those have actual results, actual numbers. So there's a lot of clinical data that you can actually go and process, and it's all out there publicly. The thing is, it's not easy to process. Why? I'm going to talk about that in a second. So first, in this work I worked with Joshua Shrez. Josh is no longer doing anything active on it, but I kept his name because he actually wrote the website I'm going to show you later. It's evolved since then, but he's done a lot of work here and he deserves the credit, so this is why his name is here. There are other people who helped; you'll see them in the acknowledgments at the end, so don't go away without seeing the acknowledgments. Some people may even recognize their name there from this group. So let me tell you what I'm really trying to do, and then go back to the clinical unit mapping, because this is what this talk is all about. So I actually want, in the farther future, let's say 50 years from now, and you can argue how fast or slow it will be, I want your doctor to be right here. I don't want to go to the doctor. I want the doctor to be with me all the time. And I don't want to argue whether I'm going to the best doctor or the correct type of doctor. I want the best one to be with me all the time, regardless of what I pick. And if every one of us has a doctor like this, basically on our cell phones, that's with us all the time, you don't even have to go to the doctor. The doctor is with you.
And if you have the best one replicated many, many times, because it's easy to replicate things that are electronic, then we'll be much healthier in the future. Who knows how long we're going to live with this thing. But to get there, we have to go through many, many hoops. It's a type of automation we still don't have and still aspire to. So I looked at other examples of automation, just to give you an idea of how reachable it is. I said about 50 years; maybe I'm wrong, but let's see how other applications that were actually human cognitive tasks got automated. I'm not talking about your surgeon, who actually has something to do with their hands. Think about the cognitive tasks that the doctor does by looking at your data while you meet them; they're looking at your charts and stuff like this and trying to figure out what to do with you. So let's start talking about other cognitive tasks that people used to do, or are still doing, and how long it took computers to solve them. Let's start with computer chess, just to give you an example. The first computer chess program that was written, and you can check it out on Wikipedia, was by Alan Turing in 1951. In 1997, less than 50 years later, Deep Blue won against Kasparov. So it took about 50 years, even less. Actually, in the middle of this, you know, people were skeptical about AI, and now there are talks about how AI will hold up and so on. But that skepticism is actually something that already happened. There was a grandmaster, a chess master called David Levy. In 1969, he looked at the computer chess programs and said, nah, it's not going to work, I'm still going to beat them in 10 years. And he did, and there are quotes about what he did and didn't do. But today, no one would make this bet. He actually made the bet and won. And later on there was a prize, and he actually lost against the computer in, I think, '89 or something like that. But today, no one would even think about competing against a computer chess program, a good one. Today there are leagues for computer chess: there's a separate league for computers and for humans. And apparently, humans cooperating with computers seem to be the best solution. So AI will probably go the same way. But this is an example from the past of how things go and how long they take. Also, people are talking about driverless cars. Actually, some of them are already driving amongst us one way or another. We had this in this meetup; we had a group presenting it, and some other companies that deal with self-driving, driverless cars. So the first experiments started about 45 years ago, and there are already driverless cars being tested. And this chart, I made it a few years ago. So in 2018 Waymo commercialized. It's still active now, and who knows how long until it, or some other car company, becomes really dominant. We're not that far from this. So why can't it happen with actual medical stuff? Well, we need to jump one more hoop than we did so far. For computer chess, well, the problem is simple: all you need is basically computer power and some algorithms. It's relatively a simple task. For driverless cars, you need something more complicated. You need sensors on the car, you need some coordination, you need maps, basically data.
And all of this is apparently, at least to some degree, seems to be solvable if people are solving it. In a few more years, we'll probably get it. We already have some of them, but the question is how usable they are. And I won't go into arguments about this but with uh medical data well we have another problem that people don't see yet i see it all the time and it's acute it's standardization of data medical data comes from all sorts of sources and once merged you have a problem. Here, let me show you how big the problem was. This is August last year. This is the amount of clinical trials that we had. About 425,000. You see, now we have 470. It's like a year and something later. The number grows in quickly and this number is going to grow further. About 55,000 had actual data like numbers. and computers are good with numbers like blood pressure, A1C level, height, weight, all sorts of numbers that you see that they measure. They had those publicly. how bad or how good it was standardized you find out that it was well not really standardized because you had in unique i'm talking about unique units of measure 36 000 units of measure the number is gone by now by the way but this is august last year okay now i actually counted it from a few years before and did the average of how many new units of measure added per day. Just a kind of measure of how much, how much variability or non-standardization there is. It's about 10 new units of measure per day on average. Think about it. Each day, all the sources reported data with different units of measure and about 10 of them. This was happening already. And this is all public. You can actually go and check this out. Maybe you will have a problem downloading a file from 10 years ago, but I have some that I can show you. 
But it's all possible to actually check and see how bad the standardization is; you can see how many units of measure are there if you follow the same thing that I did. So what I did is I said, okay, it's a problem. There aren't really that many units of measure, or maybe there are, but if you look at the data you'll see many of those are misspelled units, or units that are the same but with a different abbreviation, or someone added context, so it did not make sense. So what I did is I said, okay, these units have to be standardized. Why? Because then you will be able to compare different trials to each other, or just put them in one mix, or even feed them into some computer program that will analyze their data. Otherwise the data means nothing. Think about it: the number seven doesn't mean a lot as a quantity. Computers understand numbers, not quantities. If you want the quantity, you need the number and the unit. But seven kilograms written as 'kilograms', like the entire word, or as 'seven kg': the computer will not recognize the 'kg' and the 'kilograms' as being the same thing. You have to make it such that it will understand you're talking about the same thing. The same seven, if you say seven grams or seven kilograms, is different for us; for the computer, the seven is the same, and it doesn't know what the 'grams' or 'kilograms' are. So what I'm trying to do is standardize all the units so later you'll be able to convert them. Think about it as like a translation task between languages, and the problem you have here is the Tower of Babel problem, where everyone is speaking their own language. Each clinical trial has its own language for units, and you now have to comprehend all the languages if you want to extract data from all of this database. So I took several existing unit standards or specifications. You can see highlighted here IEEE; IEEE actually gave me permission to use their data set for non-commercial purposes, so you can go on my website and see some of their units at some point. But there are other sources, like CDISC and BioOntology. Actually, let me show you: these are all my data sources. clinicaltrials.gov, which is public; CDISC, an organization that deals with data standardization for medical purposes (actually it's based in Austin; check it out, the headquarters are in Austin); NIST RTMMS, a project by the National Institute of Standards and Technology, NIST, which actually cooperated with IEEE, and this is how I got their units; and the Unit Ontology, a website which has data in it. Of course, I also took non-units, because later, for machine learning purposes, I need non-units. But those are my data sources. And eventually it all gets centralized in ClinicalUnitMapping.com, which is the website I have. I'm going to show you how it looks: the front page. You can actually go there and see this front page. And what I do is I take units from each of those. Now I have like 5,000 units that are standardized, and I'm trying to match all of the 36,000 units to those 5,000. The way you do it is with machine learning techniques. At this scale, you need to have a machine to help you. It's impossible to do by hand, although a lot of it is done by hand; I did at least 11,000 or 12,000 units by hand alone. We're going to talk about this later.
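A toy illustration of the point about quantities: to a program, 'kg' and 'kilograms' are just different strings, so values cannot be compared until the unit text is mapped to one canonical form. The tiny synonym table below is made up for illustration and is not the ClinicalUnitMapping.com mapping.

CANONICAL = {"kg": "kg", "kilogram": "kg", "kilograms": "kg",
             "g": "g", "gram": "g", "grams": "g"}
FACTOR_TO_GRAMS = {"kg": 1000.0, "g": 1.0}

def to_grams(value, unit_text):
    unit = CANONICAL[unit_text.strip().lower()]   # standardize the spelling first
    return value * FACTOR_TO_GRAMS[unit]

print("7 kilograms" == "7 kg")                          # False: the raw strings differ
print(to_grams(7, "kilograms") == to_grams(7000, "g"))  # True once the units are standardized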
But basically what the website does is help us find unit proximity through clustering and some other techniques. The website also helps the user label those units and map them. The labeling slash mapping is what the website actually does, and it aids you in doing this by using supervised machine learning, which means you can ask the website what the standard form of this or that unit is and it will give you an answer. So here is just a snapshot of how it looks; you can actually access it, and I'm going to show it to you in a minute. But a few important things about the website: it's multi-user, so multiple users can label and work on it in parallel. Users can see what others are doing, and the system actually promotes some of the units that have been mapped to some point: it floats up the things that look fairly well mapped, gives them priority, and shows them to you, but it also gives you a drop-down of all sorts of other possibilities that you might think of. For many units you want to see all the close things together, so the website shows you this already. And of course, it highlights what is a standard unit according to this or that standard, and this can be expanded to many more. So these are the important things about it. But people here like to hear about machine learning, so let me give you an idea of how that looks. Here you see, I think it's 25 or 30 units, the topmost units that are being used by clinicaltrials.gov. Look at the differences. You see 'percentage of patients' and 'percentage of subjects', which are basically the same thing, or 'percentage of participants'. However, the system doesn't know that they are the same. Here, 'percentage of participants' again, sometimes with a big P, sometimes with a small p, sometimes 'patients'. All of these are the same thing, and you want to map them all to percentage, the percent symbol. Same thing here: 'participants', with a big or small P. So what you can do is extract, just from the text, with very simple techniques, a vector for each unit. Here I'm showing how similar those vectors, from all of those words, are to each other. If you just look at this portion, forget about the words, you can actually see that this and this and this and that have some similarities between them, so they are kind of close. So this is one thing you're trying to do using unsupervised machine learning: when the user, the labeler, actually works with those, they want to see them all together. So what you do is cluster those units. Here I'm using HoloViz to demonstrate: you see all 30-something thousand units here, and you can see their proximity. The x-axis and y-axis are the same units, so it shows you the proximity between 0 and 1, and the diagonal should be highlighted. And indeed it's highlighted; the fact that it's not a perfectly clean diagonal is because you have auxiliary units here, and I won't go into that here. But this is unsorted; this is where things are not in clusters. It's similar to before, where 'participants' and 'percentage of participants' show up in places one, three, and maybe twenty-something, and all of those, however, are very similar. So using this cosine similarity, you can actually show how close they are.
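A minimal sketch of the unit-proximity idea described above: turn each unit string into a character n-gram vector and use cosine similarity to surface near-duplicates such as 'percentage of participants' versus 'Percentage of subjects'. scikit-learn is used here as a stand-in; this is not the website's actual pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

units = ["percentage of participants", "Percentage of subjects", "percentage of patients",
         "participants", "Participants", "mg/dL", "mg/dl", "cells/uL"]

vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(units)
sim = cosine_similarity(vectors)

for i, u in enumerate(units):
    # most similar other unit string (skipping the unit itself at position i)
    j = max((j for j in range(len(units)) if j != i), key=lambda j: sim[i, j])
    print(f"{u!r:35} ~ {units[j]!r}  (cosine={sim[i, j]:.2f})")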
And you can do it before you cluster them and after. On the right side you see after clustering. And you can see things are much more highlighted near the diagonals. This is how much, this is the way I'm showing you order visually. It's too much to even zoom in, but I'm able to show it to you and show you that this is how you order the data. And after ordering the data, you can actually put it in your website and then all the units become clusters. Actually, let me show it to you. So here you see, this is cluster zero. all of those things look like percentage of something and indeed you see all the a1c's we're going to talk about that later one close to each other maybe not always is but something else but here in cluster 8 you'll see things that look total load for a second and does it's quite a heavy lift for the machine. It's a fairly machine, but now you can see cells or copies per million cells. And you see all of those things, they look totally different than the A1Cs that we saw before. And of course, you can go into things that look like even different. So you have different clusters of things. They're kind of similar, but not really. So this is what the clustering does. It just orders everything. So users can actually look at them and see similar things next to each other. And then you can say, oh, this and this are the same. They're just written differently. And this is what this website is for. It's actually to make some order about uh it orders the things and allows you to label later on so this is the first thing that we're doing is clustering now after you clustered everything you want to do one more thing you want the machine to tell you to tell you, now, you know, this unit that you just saw before, well, it should be written this way to be standardized. You want everyone to speak the same standardized language. Now, to do that, it's not enough to use unsupervised learning that didn't ask the human to do anything. You actually have to show it enough examples. It's called labeling. You label some of your data and you show it enough examples where you actually say this maps to X maps to Y, Z maps to A or so on and so forth. So this labeling processes human intensive, human has to do it, but you can still run an AI and see how it will work even without labeling by creating mock labels. So this is how I started working, created mock labels. Later on I actually did actual labeling. But I experimented with many technologies you'll see all the technologies and later on you can go and see the references of what technologies are used to actually figure out what's going on and how to use supervised machine learning supervisors because i'm telling it this unit should be like that you actually dictate someoneises and tells you how things should be. So I used many, many technologies and some of them you recognize some LSTM, CNNs, those are neural networks people use, but they used other types of technologies like learning to rank, even simple classification, which is the simple neural network I tried. The things that actually were accurate enough started to be a few years ago, it's all those sequence to sequence networks. Those were good, but they were still not as good. And everyone now talks about ChatGPT. Well, ChatGPT is based on technology called Transformers. 
Transformers were first published in 2017; the paper that defined transformers and actually brought them to the world is called 'Attention Is All You Need'. Transformers have those attention layers in the neural network that allow you to work better with sequences. Until then, sequences were a real problem; LSTMs and all sorts of other recurrent neural networks were suggested, but transformers actually solved the problem. So when transformers came, you can see my accuracy jumped significantly, like almost 20-30 percent, and this was significant. Now I was able to do the website with this technology; before this, the website used the older technology, but now I could actually implement the supervised machine learning, which I did over the last several years. Then the issue became labeling. Okay, now I have a neural network that is accurate enough, or fairly accurate enough, for me to actually work with. Now I want to tell the machine: give it enough examples of what's good, like this unit maps to this, that unit maps to that; you just show it what's good, and what's bad you have to figure out in some other way. But I had to map all those units to the correct label. So labeling is a time-consuming task. I have 36,000 units. How do you do this? Well, within approximately a month I was able to do 12,000 units, plus 5,000 became automatic because you have the synonyms of the standardized units, so about 17,000 units were labeled. Out of those all together, if you count it all, you have like 40-something thousand units plus. To do this properly, I used technology from Data Heroes; here, I'll show you their logo. It's a company; they have a website, and they actually have technology that you can use to help you reduce the amount of training for machine learning. However, I didn't use it to reduce my data set. What I did is actually kind of the opposite: I said, no, I want to figure out which units I need to label to reduce my load. Because the trick is, once you label those, the AI starts helping you. So instead of labeling all 36,000 and then seeing how good it was, you can just label a few, and then the machine starts suggesting back, telling you what's good and what's bad; actually it shows you what it thinks is good, and you can have multiple suggestions showing you good options, and then your labeling task becomes almost trivial: many fewer clicks, and you do things much faster. And I was able to do all of those within the month, and this was a busy month: I was doing some other things; this was like off-hours work. So the technology they use at Data Heroes is called coresets, and basically what they do is similar to clustering, but not really: they kind of build you a tree of similarities, and then at the upper level it shows you all of the things that are kind of distinct. So if you map those, you know that you captured many of the labels that you need to label, and you don't repeat yourself many times. So it actually reduces the time to perform the labeling. You can do similar techniques with clustering, but they actually got it down to the point where it's a tree, and you can do it at several levels. So there are some numbers that I used to actually train the neural network.
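A rough stand-in for the coreset idea mentioned above (Data Heroes' library works differently): pick a small, diverse set of representative unit strings to label first, so that each manual label covers many similar variants. Here 'diverse' is approximated with k-means centres over character n-gram vectors.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import euclidean_distances

def pick_units_to_label(units, budget=50):
    """Return one representative unit string per cluster, to be hand-labeled first."""
    vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(units)
    k = min(budget, len(units))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    dist = euclidean_distances(vectors, km.cluster_centers_)          # rows: units, columns: centres
    representatives = [int(np.argmin(dist[:, c])) for c in range(k)]  # closest unit to each centre
    return [units[i] for i in representatives]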
I didn't label all the 30-something thousand units I have. What you see now on the website is only after it has been labeled with 17,000 units. And I'm not talking about the non-units; the non-units are many other strings that you can actually add here, 400,000 non-units as well. So 17,000 units and 400,000 non-units is how you tell the machine: this is not a unit, so don't translate it to a unit. Otherwise the machine thinks everything is a unit and will try to translate everything to a unit. So eventually I got to the point where I reached about 3,000 labels, meaning those 36,000 units that I had initially became 3,000. This means it's less than the initial set you get if you combine all of the unit standards together, which means it's now more reasonable. It's not the end of the work, there's still some tidying to do, but at least the system now comprehends the units, and the question is how good or how bad it is. So when I trained it, the training set had about 400,000 records and the validation set 45,000 records. So now, if you actually go and see the results, this is how accurate it is. And when you train, you not only train on the unit; you also train on the unit and the context, or on the context only, and ask the system to infer the units. This is really interesting. Just to give you an idea of context: context is like 'weight'; the unit is kilograms or grams. Notice weight can be measured in kilograms or grams, but kilograms and grams are associated with weight, or maybe some other words. So weight is the context; kilograms and grams are the units. So there is not a one-to-one connection, but still, in many cases, when you give the system the context, it will figure out the standardized unit, the correct one, for you. It's not always possible, but it will still try, and the machine will do it for you. So you can train it in all sorts of variations. I'm showing you the training accuracy, and the graphs show you it's about ninety-something percent; this is for the training set. But we are actually more interested in the validation set, which shows us how accurate it was after it was trained. So you can see: with the context only, it shouldn't be that accurate, but 62% of the time, if you give it only the context, it will give you the correct unit. This is not a trivial task, because from context you cannot always infer the unit (it's a one-to-many connection), but 62% is still impressive. If you actually give it the unit, or the context and the unit, or the unit without the context, it's about 86% accurate on the validation set, which is pretty good. On the non-units it's almost spot on; it recognizes all the non-units immediately. So you can check it out. It's all written in Python; I'm using all sorts of libraries that you know under the hood. I'm going to show you a little bit of a demo, but first I want to summarize and tell you: this technology that I'm showing you for units of measure can be used for many other things that need standardization. I'll give you just some examples: cities in the world, how they are spelled in different languages.
Boom, you can do this with this fairly simple. You just the database, you just change to instead of units to cities of the world or something else. So anything that needs standardization and I'm not, it's not aimed at really translating between languages but for standardization purposes it's really good. It will recognize medical units even if misspelled now. It will comprehend in the future medical units because it knows if you misspell things say oh this is this unit I know what it is because you have all all data and in the future i want to be able to comprehend quantities and numbers associated with you basically it's a it's it's something like a translation a a a conversion engine but now much more powerful because it's aimed at clinical data which is much tougher than other conversions that you see so but you can apply the same technology to other uh to other fields apparently they have the same problem in other sciences not necessarily medical and there's some solutions out there but this is aimed specifically in using the best technology available today transformers to actually uh go and do this so this is the acknowledging page i'll leave it for a second but sometimes some people will probably recognize their names there so if you do be happy you helped me quite a lot uh this way or another there's some companies that actually help you with all sorts of visualization, data heroes, also with the tool, and they will very helpful provide additional information. Let me show you the actual website. I already showed you the unit mapping here, but show you how bad the problem is. HVA1C is a measure of diabetes. It's measured in percent. See how many HVAA1c's are there? All of those are HbA1c's. HbA1c and A1c are the same thing. A1c. Notice, people write it the same thing. Once with a big C, once with a small c. Sometimes like this, sometimes like that. All of these are the same, and all of these should be inferred into percent, which you see here. This, what you see here, the possible synonyms, this is what the system knows after it's been trained on an AI. And all of those are actually percent, but it will also click you and you want something else. Let's say in my world, I want it to be something different and you can train it to be something different so you can just click here and then later the machine learning will kick in and learn it so all of this is possible and this is what this website is um i can give you more examples it's it's it's endless how many how many things uh work or don't here, but I'm going to show you just a few more clusters just so you get an idea how bad the problem is. So you can see those are more complex units, and this is like here. I didn't know exactly what is the standardized form, so I wrote several options, but they are the same, and they repeat themselves many times. So the system will know this will translate to this or that or that. In the future, we can decide. Notice there's also a problem with special characters. I didn't talk about that, but you see this dot? It's actually a Unicode character that someone plugged in, and therefore it looks different than a different unit that's approximately the same. And you can see many times it's also almost the same unit, when it looks differently. Here the AI is able to infer. This is the AI working now. I'm showing you the topmost AI. You want other options, these are similar other options the system can suggest to you if the user wants something else. 
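A condensed sketch of the supervised transformer step the talk describes: fine-tune a small seq2seq model to map a raw unit string (optionally with its context) to a standardized form, then ask it for the standard form of an unseen spelling, as in the HbA1c demo above. The model choice (t5-small), the toy training pairs, and the hyperparameters are illustrative assumptions, not the configuration behind ClinicalUnitMapping.com.

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

pairs = {  # raw unit text -> standardized unit (tiny toy training set)
    "text":  ["Percentage of participants", "percentage of subjects", "mg/dl", "milligrams per deciliter"],
    "label": ["%", "%", "mg/dL", "mg/dL"],
}
model_name = "t5-small"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def encode(batch):
    # Tokenize the raw text as input and the standardized form as the target sequence.
    return tok(batch["text"], text_target=batch["label"], truncation=True, max_length=64)

train = Dataset.from_dict(pairs).map(encode, batched=True, remove_columns=["text", "label"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="unit-mapper", num_train_epochs=3,
                                  per_device_train_batch_size=8, logging_steps=10),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tok, model=model),
)
trainer.train()

# Inference: ask the fine-tuned model for the standardized form of an unseen spelling.
ids = tok("Percentage of Patients", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=8)[0], skip_special_tokens=True))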
We can go on and on, but let me show you actually the infer part. If you cross the infer in this way, you can actually ask the system here this is percent i misspelled percent on purpose the red line is if i asked it to infer the unit it will be able to it will match it to the percent this is what i trained it to do But if you want something like here, something simple in BMI, BMI is the context, it's not the unit. I'm asking it what's the unit of BMI? The body mass index. And it does give me the correct unit for BMI in the standard form I chose. Later on, we can choose different standard forms and teach them but the system already knows things and this is the level of development currently what you see here is a better version it's after about 17 000 units that I trained I intend to expand it quite a bit so check it out in the next several years if you want to see other publications that I published quite a lot, you can see throughout the years I started first publishing it in 2019 or 18, but you can see things that I have done in the last several years, publications and so on and so forth. This specific presentation I'm giving serially every once in a while. Today I'm doing it here because we had an open slot that i needed to fill out but i'm giving it in many other places around the world and hopefully this project will catch up and hopefully in a few years we'll have standardized medical data and then the road towards computers comprehending medicine will be much shorter than it is now. This is my uptake, those are my ideas, I stand behind them alone. I do have, I have to show you that, a conflict of interest statement. I'm developing it on my own funds but I am every once in a while I have some, I work with some others and therefore all those conflict of interest I write down so you can actually check it out. You can find this presentation on GitHub and previous others if you want. I am done talking. If you have any questions, I'm happy to answer. I do see Jeff there jeff is in the audience so yeah there's also the acknowledgements yeah yeah that's what i'm saying like i saw jeff yeah so it sounds like jeff did some work there yeah well jeff he helped me quite a bit he actually hosted the computers at some point yeah yeah yeah yeah we we me quite a bit he actually hosted the computers at some point yeah yeah yeah yeah we we it's a long story but like he hosted and he helped in other ways uh there are many people here i'll show the acknowledgments again maybe people recognize several people some of many of them are from like uh um austin and you might recognize them Austin and you might recognize them. They gave some ideas here and there, like here James Bednow you probably recognize, he gave talks with us. Yeah, they helped me with the visualization of this. The visualization was quite tough and I wanted to show what's going on. So I do have a question. So I don't understand the sources there, how it looks like the like the because I never looked at the clinical data. So when you say clinical data is just like tables, tables, rows, like white papers. It's not like, let me show you actually, it's part of the website. So let's see if it's still if this part still works, because sometimes they change things. They actually changed clinicaltrials.gov not long ago. And wait, we'll just press the other button. Here, let's go to mapping again. Give it a second. 
It's quite a heavy load to live all of the suggestions for one page so this is why it takes long and then the machine i have now is not like the strongest if someone knows how to enjoy this website easily then and if we're cheap then talk to me yeah like how clinical style it looks like and what's the yeah yeah so here this is this you see this HPA1C? The website actually, for the purposes of actually labeling, you need to be able to see what's going on. Why is this unit being used? You actually can see this is used in those clinical trials. You see, NTCT, blah, blah, blah. This is the clinical trial. And if you click here, it will actually go to the clinical trial. this is how it looks like uh there's a lot of information here i'm not sure this is the old version of this the new version maybe it's the new version i don't see classic clinical trials so this is the new version of the clinical trials.gov you see all of those tables this This is a huge hierarchy. I keep you know, the XML file of clinical shards of blood is complex. I won't go there, but you can actually go and find the HBA1C, what was it? Let's just A1C. Here, change in A1C for baseline. Here, here's the unit of measure that it's used. So you can actually find out exactly, and the number associated with this is this. And this is a table, and this is only for one outcome. There are multiple outcomes per trial. Here, this is another outcome. Change in blood glucose baseline. And notice that the unit here is different. So each one of those clinical trials has multiple different units, and you have to traverse all of this knowledge and standardize it. And each one of those is speaking a different language. The contexts are different. This is a clinical trial or something else, but you cannot even compare them side by side today because the numbers will not match. Even if there is a conversion, sometimes there isn't, match even if there is a conversion sometimes there isn't but like if there is a conversion or if they even if you are using the same units you cannot even put them side by side you can even compare them for God's sake when you want to extract human day you want the machine to actually do complex stuff with them a human's gonna comprehend it if you put it side by side why would the machine be able to do this yeah that's a big problem huh yeah this is what i'm trying to solve yeah but think about it it's not only for clinical trials tomorrow it can be from here you're a big company that has many offices around the world and everyone is providing some report that you need for your whatever you're doing i don't know chemical something or or oil or whatever think about all of those sources of information each one sending your report that's different that tells you the story then you need to actually build a bigger picture for all of those smaller stories how do you do that machine learning can help you and this is what I'm using it for. I'm here only doing the small things of like figuring out the units. Once I figure out the units, you can go to the next level and then the next level and then next level. Eventually, you're able to merge it with the disease models that I'm doing, which is a different type of AI altogether. So it's a long journey. Yeah, that's, this is a good problem solved for sure. Oh, thank you. David is happy. Yeah, David, I'm also happy doing this project. And Jared, oh, Greg. Oh, thank you. Go guys. No, that's nice. 
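A small sketch of pulling the reported units out of one study on ClinicalTrials.gov, as in the walkthrough above. The v2 JSON endpoint and the idea that unit fields have 'unit' in their key names are assumptions about the current public API; the recursive walk just collects any string field whose key mentions a unit, which keeps the sketch robust to the exact nesting.

import requests

def units_in_study(nct_id):
    # Assumed endpoint of the ClinicalTrials.gov v2 API; returns the study record as JSON.
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    data = requests.get(url, timeout=30).json()
    found = set()

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if "unit" in key.lower() and isinstance(value, str):
                    found.add(value)
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(data)
    return found

print(units_in_study("NCT01234567"))  # placeholder NCT id; substitute a real trial with posted results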
So the ultimate goal is like you mentioned something about like having everybody like a doctor in your pocket in a way, right? Long-term, it's maybe the end of my lifetime. I may not be the one who actually achieves it. And I'm working on a small part. But think about it. If all of us, all of the people who do Python or machine learning or something like that unite forces and actually solve small little problems like this, eventually all those solutions will come up to the point that this will be your doctor. You no longer go to your doctor. The doctor comes to you here. If there's a problem, it detects, it tells you, and then maybe a human doctor or maybe a drone with the medicine comes to you. Think about it. It's a whole different approach. Here, I'm taking care of my old mom now. You know what the biggest mess is actually? It's driving to the doctor. My mom is immobile now. So even the mechanics of running down, bringing mom to the correct doctor in the correct time that they're available, only this is a logistical problem, then on its own it's a mess. And I'm not talking about the fact that the paperwork doesn't even follow the person. The problem, this problem hasn't been solved. There is Teladoc. I don't know if you tried Teladoc, but in cases where you just need prescriptions, you can stay put and just cam for it. There's a technique. I'm unfamiliar with this, but I'm currently outside the US. So it's like maybe US solutions will help me. My mom is not. It would work over the internet. So in theory, it could work from anywhere. Yeah, the question is, do they serve someone from out of state? I just gave you an example. But it's also a different problem, because different states will have different regulations. But think about it. If you achieve making one good digital doctor that actually does things, you can replicate it. Think about the years of study you have to go to train one human doctor. Right. So another thing you're sort you're aiming for I think is like Star Trek's little whole thing that the doctor held in his hand right they would scan the body and then could diagnose things oh this is still science fiction this is maybe a hundred years for now but yes the idea is you don't need to train the doctors the way they are done today for medical school you still need those for some cases like surgeons that do something good and some humans just make decisions better better than machines because sometimes the machines will have no data and you don't know whether they're going machines will not do anything without data and sometimes humans have to make decisions where like on things that they've never seen before and no one else saw before. So maybe here you'll need human doctors or some other ethical thing or we're far away from that. But... 
Well, but actually I have a colleague that has created a mobile MRI that's the size of a briefcase and they use it for like bomb detection and that kind of stuff that also do soft bodies as well so if you combine that with your ideas then you could have you know your ideas are related to the knowledge component of the analysis etc right yeah the physical technologies going around too yeah so think about instead of coming to the hospital we actually have all the facilities you have all those devices that are sent to your home and the person just had to follow a set of like instructions look some devices already have like some watches will will measure your vitals and will keep them so and then you can able to analyze them and then the companies who hold them maybe hold their data and then they came able to analyze that that's a whole different story but like measuring your vitals is one problem but then again assuming you have data already what do you this what does the machine can do and decide how do you get the knowledge to do this because think about it medical medical data is very secure. If I'm a hospital, I'm not allowed to release data on you. But clinical trials are publicly open. And this dataset is something that you can use. And even now you can use and learn from it. If you are able to do something with this dataset, then tomorrow you will be able to do something by merging other data and there are techniques that actually doing things in other like there are many many more techniques that need to be used to actually solve those problems that are talking about i'm i'm talking i'm i'm working on a tiny bit of the something tiny that will eventually if it merged with many other solutions, will eventually be this thing that I'm talking about. I'm one person, but if many people join their forces and they do it properly, without, you know, and able to share it in ways that actually advance the ability to actually do this then in perhaps i actually asked some people when it will happen that computers will replace doctors so the doctors actually say it's already happening they call it dr google apparently dr google is an adversary to real life doctors because every patient comes to them already checked on google what's happening or some other website and then and then and there are other all sorts of problems of this it's not i'm not sure it's a good idea to recommend this or not but like it's a fact that people use it doctor google exists kind of it's it yeah it's just a blur doctor google is super here but like because they they say oh it's doctor good the one who gave you advice but no it's just dr google here but like because they they say oh it's dr good the one who gave you advice but no it's yeah this real time like i went to a doctor and then the doctor went out of the room and then when i poked my uh my head over like on the door the doctor was on webmd checking to see like what what's going on with me so So I could have checked with WebMD myself and not paid the money. MARCO TULIO PIRESI- By the way, if you go out to all of the people who are doing machine learning now, you know how many of them are actually talking about using medical data, answering medical questions? I just talked to someone on this Saturday on AI Mac, a maker space and someone is still doing that and how much time do you think that some people will create those bots that are actually pretty good and it will become a riot and people will start using them? 
Yeah I was just in ICU yesterday looking at my own information and um you know informed the doctors and they were like oh yeah that's probably what it was so yeah that was using a chat gpt board uh you see so look it's endless and the question is when it will become good enough so people use it so some people the doctor said it's endless and the question is when it will become good enough so people use it so some people the doctor said it's already happening but then you ask the ai people say and they tell you no some parts are impossible to solve almost like the more person is technical the less they see the solutions that the doctors think already out there this is what my i did it only it's not a like a huge study i just asked a bunch of people. You can actually find this in the talk I gave in in PyData and in PyCon Israel about this project. So it's all in the About page of my here. I'll show you the About page again. You can find it all and exactly what questions they're asked and so on and so forth. So you can find them in either this PyCon Israel talk or PyData Austin talk. People ask things and people think about the same thing about the ability of the machine to do things. But one of the answers is pretty smart um go and look at this it's it's it's it's a it's a nice thing but this is my uptake of it it will happen whether in 50 years or 40 or 60 or maybe 100 or maybe 30 or maybe even 10 i don't know but it's gonna happen the question is do we want how do we want it to happen and how are we going to reach it? I've been working on this for five years. I'm pretty sure that what I have now is a good solution because I am using the latest technologies. Up to at least a year ago or something like that. Today there are even newer technologies that are coming up pretty fast, but it's still based on transformers. Until something better than transformers comes around, this is the best technology around there. As far as I know. Any more questions? Let's check the chat.... chat uh okay jared do trials themselves actually introduce new units of measure like there is no authority for them like iso or healthcare industry groups well jared the thing is clinicaltrials.gov the website i showed you is where all the data from all the people who conduct clinical trials goes into by law now uh a few years ago they actually gave me access to the obey to where they enter data so I can actually look at it. So just so I can look at see how it goes in. So they do standardize the data. They do check the data that comes in for errors. However, it's impossible to actually do it properly because each trial that plugs in the data plugs it the way they want to. Many times they just copy from reports they publish in this journal or some other report that they publish elsewhere or some PDF or whatever it is. And they just plug it in, copying it almost verbatim. And apparently each one of them uses a different language. Now, should they standardize? Yes, I think so. I think this is how it's supposed to be. But if you look at the reality, reality is no. I'll tell you even worse. I was in a patent lawyer convention. The patent lawyers tell them, you shouldn't put everything on clinicaltrials.gov. This was made publicly in the patent because each one of the people who put data in clinicaltrials.gov, not each one of those, but each one of the entities, they may have other interest about what they want to share and how. 
So we're talking about a lot of diversity here, and when you have this level of diversity, you get to the point of, yeah, the Tower of Babel. Because everyone may be slightly different than the other, everyone is an individual, but then when you get a bunch of individuals each speaking their own language, you get the Tower of Babel, like I just showed you here. It's not working. Let's see if there are any more questions. Yeah, David Hirsch mentions SNOMED CT. Yes, it's one standard. There's FHIR, actually, it's called FHIR; FHIR, SNOMED CT, there are all sorts of standards. There are a bunch of them, David. But apparently, when you have multiple standards, the question is also: what's the standard? So you want to be able to translate between standards, or have a standard of standards, or the ones that you want to use. And you want the machine to do it for you. This is what this project can help you do. So if your organization wants to use this standard, yes, you can train the machine to translate to that standard. It's just a matter of how much work you can put into this. Can you maybe talk to the government and use your tool to actually help with the input, so the data can become standardized? Well, actually, sure. I'm a business; I will reveal this part. Last May or June I was in the DC area and met with the former head of clinicaltrials.gov. He connected me to the person who deals with the AI there, and I offered them the tool. They were impressed by the accuracy, but they said they had their own AI. Hm. Yeah. Or at least this is what I remember from the talk; this is what I was told. I don't know how fast or slow the government will actually implement something, or whether it will be better or worse. I can tell you that there is another government project, led by NIST, where I also made contact and showed them, but then again, they don't work with sole proprietors in that call; there was a funding opportunity, and they actually excluded sole proprietors. So if someone wants to build a business with a sole proprietor and write a grant, then talk to me. They already started; that team is about three years behind me and doesn't use the technology that I have. They are not, you know, as technologically savvy as I am. Look, I did this on my own, with Josh's help and some advice from multiple people whose names you saw there. Everyone contributed like an advice, or 'use this', or 'try this', or 'here's some data', and stuff like that. These are the people that helped, and I did it alone. So AI and computers, it's like computer chess: if you have AI plus humans, it's better than either the computer alone or the human alone. You have to have this combination. But then you don't have to invest so many resources. That's my take on it. Yeah, it seems like it's an input problem, so if you can solve the problem at the source, then you don't have to deal with it. Yeah, but you just solve one problem, that of clinicaltrials.gov. I won't tell you the continuation of this project publicly; some of it I shouldn't reveal, because I do want to eventually monetize it.
I did put the conflict interest statement and so on i do want people to actually uh i don't want to profit from this project eventually i did put five years of work in it it's not just an open source thing that you do something you invest a lot of work in it years of work um so i can tell you that there are other applications even if you stop the leak in this source, how many more leaks like this do you have? Think about it. Yeah, I don't know. You have a problem in clinicaltrials.gov which is very specific, but okay, you stopped it there. How many more sources of data you have? Yeah. Medical data that you can actually deal with. Won't they have the same problem? I'm pretty sure they will have. Yeah, I mean, it seems to be like, solution could be pretty universal to many things. And I told you it can be applied to other standardization tasks, such as I gave you a simple example, cities in the world. But if you work in places where you a simple example, cities in the world, but if you work in places where you have lots of sources of data coming together, streams of data, they look, even if they put in the same table, you will see that the minute you have multiple sources, you have a standardization problem, this can help. It's not a foolproof solution nothing but this technology can help even if slightly very slight changes sometimes okay uh i hope i didn't say i don't see him anymore do you see any more questions or does someone has do you see any more questions or does someone has don't see any more questions there i see tom is saying something it's pretty amazing such conceptually simple issue like standardization of machine data formatting requires so much work. Well, yes. The problem is tough when you have a lot of data, Tom. Yeah, but once you've did the work, then, and you have a machine to help you, then it becomes easier. The more I started working in the system, the more easier it became. I believe that the last several few uh last units i have to map will take me much less than it took me so far just because i have the machine now to help me it's already pretty much standardized and then of course there'll be level of standardization and some other variations in this but i i won't tell you the future of this project so touch me in a year or two, I'll tell you how it was. Yeah, Oracle paid 28, yeah, I see that people say that Oracle paid a lot of money. Yeah, there's a lot of money in healthcare, but apparently I don't see a lot of it. If someone has connections, connect me this this project is looking for someone big to swallow it it is silas what do you say ah silas is bringing some sketch yeah just link into it to the oh yeah just link to one of the XKCDs because you mentioned the problem of standards. And then somebody comes up with a brilliant standard. It's like, OK, well, now we have a new problem. We've got to interoperate between these new standards. Just a different idea, but it reminded me of that XKCD of the problem of somebody wants to fix the standard problem. 
And then all they did was create the n plus one standard they now have to deal with. Yeah, well, that's a problem everyone has all the time. Tom says something, and I know it very well: he used to parse requirements data to feed a large software model, and the human creativity in writing similar requirements was amazing; that was pre-AI days. Well, Tom, now you have AI that you can teach to actually figure things out, so why not use it? All of those AIs that people are using and are very excited about, why not put them to use on some good examples? This is one example. I've been doing it for five years, but really the last several years were a jump, and I believe the next several years will also bring very useful technologies that will help get this solved. Okay. David is writing something about Beowulf. What the hell is Beowulf? I'm not familiar with it, so I'm not sure exactly; I'll check it out later. Unless you want to say something about it, then I'll just check it out later. Unless anyone has more questions, can we stop the recording? No one's answering? Is everyone quiet, or what happened? Maybe there are no questions. Okay, no questions. Then, guys, thank you very much for listening to this. All right. Thank you, Jake. It was a great presentation. Thank you. Thanks, Jacob. Thanks, Jacob.
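The Q&A above never shows the mapping code itself, but the core idea the speaker keeps returning to, translating free-text units into a chosen target standard, can be sketched minimally. The synonym table, threshold, and map_unit helper below are invented for illustration and are not part of ClinicalUnitMapping.com, which relies on trained models and the full UCUM and related vocabularies:

```python
# Minimal sketch of mapping free-text clinical units to a target standard.
# The synonym table and cutoff are illustrative only; a real system would
# use trained models and full standard vocabularies such as UCUM.
from difflib import get_close_matches
from typing import Optional

# Tiny, made-up lookup from known unit spellings to a UCUM-style code.
KNOWN_UNITS = {
    "mg/dl": "mg/dL",
    "milligrams per deciliter": "mg/dL",
    "mmol/l": "mmol/L",
    "millimoles per liter": "mmol/L",
    "beats per minute": "/min",
    "bpm": "/min",
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-duplicates compare equal."""
    return " ".join(text.lower().split())

def map_unit(raw: str, cutoff: float = 0.8) -> Optional[str]:
    """Return a standardized unit code for a raw unit string, or None."""
    key = normalize(raw)
    if key in KNOWN_UNITS:                       # exact match after cleanup
        return KNOWN_UNITS[key]
    # Fall back to fuzzy string matching against the known spellings.
    candidates = get_close_matches(key, list(KNOWN_UNITS), n=1, cutoff=cutoff)
    return KNOWN_UNITS[candidates[0]] if candidates else None

if __name__ == "__main__":
    for raw in ["MG/DL", "millimoles per litre", "beats per min"]:
        print(raw, "->", map_unit(raw))
```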
Jacob Barhak - ClinicalUnitMapping.Com Takes a Small Step Towards Machine Comprehension of Clinical
3,647
Austin Python Meetup
20240123
Jacob Barhak - ClinicalUnitMapping.Com Takes a Small Step Towards Machine Comprehension of Clinical Trial Data Session: Clinical Trial data is not standardized and numerical data cannot be comprehended since the units are not standardized. ClinicalUnitMapping.com is a web tool constructed to help standardize this data and merge it with the following standards and specifications: UCUM, RTMMS / IEEE 11073-10101, BIOUO, and CDISC. IEEE 11073-10101 - Adapted and reprinted with permission from IEEE. Copyright IEEE 2019. All rights reserved. The intention is to unify unit standards and machine learning tools, so that clinical trials data would become machine comprehensible. Presenter: Jacob Barhak Is an independent Computational Disease Modeler focusing on machine comprehension of clinical data. He is one of the organizers of the Austin Python meetup.
2024-06-19T10:27:46.107335
https://www.youtube.com/watch?v=s9L-qFF84Ew
The reference model for disease progression explaining COVID-19. Computational modeling allows researchers to simulate and study complex systems including disease at multiple levels powered by significant achievements in computing power and software. Dr. Jacob Barhak is an independent computational disease modeler. He draws on the multidisciplinary expertise to help machines comprehend healthcare. The extensively published and patented reference model for disease progression was developed by Dr. Barhak in 2012, initially for application to diabetes. Dr. Barhak observed that while there were many models of this chronic disease, not all were good representations of the real-world clinical data, that is, what actually happens with patients. His approach to this problem was to create a league of disease models and validate these using publicly available results from clinical trials, allowing a test of how well the models and assumptions fit the existing data. Importantly, the reference model is an ensemble model, meaning that it can assemble lots of different models and compare them, assessing their credibility. As such, it is also a fitness engine in which the models and combinations of these are ranked according to how they fit the real world's data. As further information is added to the reference model, it accumulates this additional knowledge, meaning that it is constantly evolving. At the start of the COVID-19 pandemic in 2020, Dr. Barhak saw the potential of the reference model to improve our understanding of this highly infectious disease, and adapted his previous work to create the first multi-scale ensemble model for COVID-19. As the name suggests, it uses multiple models describing cells and organs, individuals and populations. The COVID-19 model incorporates models of infectiousness, the level of infectiousness of each individual from the time of infection. The models of transmission reflect the probability of contracting the disease through interactions with infected individuals. The models of response reflect the behavioural choices of each individual affecting the number of their interactions with others in light of the pandemic. Several mortality models incorporate the probability of dying from COVID-19 by age and time of death since infection in days. The recovery model defines the condition of recovery. Finally, observation models are implemented to correct for the fact that the reported numbers are not always accurate. Dr Barhak is using publicly available data and testing how these can be explained using the reference model. To date, the disease is not yet fully explained and his ongoing work is dedicated to refining and improving the approach to better explain the existing COVID-19 data. If successful, the reference model would provide a valuable tool to better comprehend new pandemics and more effectively inform public health interventions.
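The narration above describes the ensemble as a fitness engine that ranks candidate models by how well they reproduce published trial results. Here is a toy sketch of that ranking idea, with invented models, observations, and scoring; the actual Reference Model uses full simulations and far richer fitness criteria:

```python
# Toy sketch of ranking candidate disease models by how well they reproduce
# observed outcomes, in the spirit of an ensemble "fitness engine".
# The models and observations below are invented for illustration only.
observed = {"trial_A": 0.12, "trial_B": 0.08}   # e.g. observed event rates

def model_optimistic(trial):   # assumes low event rates everywhere
    return 0.06

def model_pessimistic(trial):  # assumes high event rates everywhere
    return 0.15

def model_middle(trial):
    return 0.10

candidates = {
    "optimistic": model_optimistic,
    "pessimistic": model_pessimistic,
    "middle": model_middle,
}

def fitness(model) -> float:
    """Mean absolute error against the observed trial results (lower is better)."""
    return sum(abs(model(t) - y) for t, y in observed.items()) / len(observed)

ranked = sorted(candidates.items(), key=lambda kv: fitness(kv[1]))
for name, model in ranked:
    print(f"{name:12s} fitness={fitness(model):.3f}")
```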
The Reference Model for Disease Progression: Explaining COVID-19
227
SciTube
20230517
Computational modelling allows researchers to simulate and study complex systems – including disease – at multiple levels, powered by significant achievements in computing power and software. Dr Jacob Barhak is an independent Computational Disease Modeller. He draws on his multidisciplinary expertise to help machines comprehend healthcare.
2024-06-19T10:48:54.148631
https://www.youtube.com/watch?v=1M645o5gWrc
Okay, so I'm very pleased to introduce our speaker today, Dr. Jacob Barhawk. Okay, Jacob Barhawk is an independent computational disease modeler focusing on machine comprehension of clinical data. The reference model for disease progression was self-developed by Dr. Barhak. The reference model is the most validated diabetes model known worldwide and also the first COVID-19 multi-scale ensemble model. His efforts include standardizing clinical data through clinicalunitmapping.com, and he is the developer of the micro simulation tool or MIST. Dr. Barhak has a diverse international background in engineering and computing science. He is active within the Python community and organizes the Austin evening of Python coding meetup. For more information, check back onto the meetup page and he has a link. Okay. Okay, can I start? Yeah, okay, I'll pass it off to Dr. Barhak and then, yeah, any questions you can feel free to interrupt. Yeah. So I'm going to talk about the reference model for disease progression, but now for COVID-19. The presentation itself is interactive. You can actually go and if you want, you can, there's a QR code that will lead you to it. You will see it says chronic disease and infectious diseases in Paris, France last month. Well, this was the keynote there in that conference um and i'm going to repeat more or less the same presentation but hopefully it will be more interactive and you can ask questions and i will add a few more things just to absorb what what it's the work is all about so this is the model and it explains what it is it's in cmtk.org this is where modelers one of the sites where models put the models on or advertise their models and there's a lot of information about it and historically i'm going to go from back this is how the reference model started. What is the reference model? I was modeling diabetes and noticed there are many, many models and many populations, and the models are not always good for each population. So I started by making them compete. And this matrix kind of shows models against populations and shows you what's good, what's bad. This was 2014, and then it's actually the reference point start 2012, but the first 2014 or 16 are still doing the matrix. In 17, I started making the models collaborate. This is how diabetes model looks like the diagram I'm going to show it later. And within time, the diabetes model grew up to the point that in 2020 it was the most validated diabetes model that was ever made. I started adding more and more information into it. And then came the pandemic. 2019, I presented in the Condacon. You can see there's a good presentation that shows what it is today. And more or less, this is where things were in 2020. And then the pandemic hit. You can see Harry. He's the head of Midas these days my this is a group that organizes all the people the deal model infectious diseases and you can see in 2021 I already presented something there and this is what I'm going to tell you about what I present I actually created the model for COVID so let me show you how things were before this is diabetes model this is what was there before this is before and then uh this is how diabetes model results look like clinical trials where you can see the names of names or acronyms of clinical trials if all clinical trials almost have acronyms. Sometimes they become words, those acronyms. And then you can see some results. So this is the diabetes model that was the reference model. 
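The "models compete against populations" matrix the speaker mentions can be illustrated with a toy sketch: score every candidate model against every study population and print a grid of errors. The model functions and all numbers are invented for illustration; the trial names are only placeholders for real study acronyms:

```python
# Toy sketch of the models-versus-populations matrix: each cell is how badly
# a candidate model misses the observed outcome in one study population.
# Models, populations, and numbers are invented for illustration.
populations = {          # observed outcome rate per study population
    "trial_1": 0.11,
    "trial_2": 0.09,
    "trial_3": 0.14,
}

models = {               # each model predicts an outcome rate per population
    "model_A": lambda pop: 0.10,
    "model_B": lambda pop: 0.13,
    "model_C": lambda pop: 0.08 if pop == "trial_2" else 0.12,
}

# Print a header row, then one row of absolute errors per model.
print(f"{'':10s}" + "".join(f"{p:>10s}" for p in populations))
for name, model in models.items():
    errors = [abs(model(p) - obs) for p, obs in populations.items()]
    print(f"{name:10s}" + "".join(f"{e:10.3f}" for e in errors))
```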
And what it does, it makes many, many models like MI models, stroke models, and mortality models compete. This is what was in 2020. Actually, this was supposed to be presented at the NIH, but then pandemic hit and all was canceled, meaning all conferences were canceled, everything became online. So this is a remnant of what was supposed to be a poster there and never and it became something online. And now when the pandemic hit people started asking me oh you're a computational disease model what you have to do with the say about COVID and at that point not much because chronic diseases are kind of different than infectious diseases. In chronic diseases the person has a disease and basically interaction with other people doesn't change anything. It's like the disease progresses in that person. So the modeling is kind of different because there are interactions between people. In COVID or infectious diseases, one person infects another. Still the disease progresses within that person but still there can be reinfection and stuff like that oh my tools were not ready for that it took me a few months to actually change the tools to be able to adapt to uh infectious diseases and today and then i created uh i published in curious the initial use case this is the publication you can find it this was in july in july 2020. then later i got some more uh information i got some support for computing power my model takes a lot of computing power we'll talk about it a little bit later and uh i published an interactive paper again i'm using holovist technology for all the interactive stuff you'll see so this is an interactive paper you can find it either through this eoi at the top or you can just google it's on my github account here and uh this is where uh the reference model was created the multi-scale ensemble model so there aren't that many ensemble models from a covenant team but none of them is multi-scale i'm going to talk about it in this during the talk why is it a little bit different and it is different than other ensembles you found in the past, because in this ensemble, the idea is for the model to actually understand what's going on. My purpose in working, and this is what the talk will be today, is I'm trying to apply the reference model to explain the data that I saw in the US. Many models try to forecast what will be in the future, they call it predict, but you cannot really predict and forecasting is very hard, especially if you don't understand all the things that you're modeling. So what I'm trying to do is I'm trying to take intelligible models, models that you can explain what they are and merge them together and using machine learning techniques and a lot of computing power put them together into something that you can later explain someone what's going on with covet 19 in ways that humans can understand it and i'm showing it visually i'm using all of this technology to show this. So let's dive into COVID. Unless you have questions by now, which you just interrupt me. If we're talking about the activity, if you hover in this presentation, you can press tabs, you can hover over things all of this is only written in Python everything I do is Python fuel Python so eventually it becomes an HTML file but it's all Python so you can actually find references and see exactly what information I use and if you click on them it will take you to that reference. But let's talk about COVID. This is a diagram of COVID. 
And I'm trying to explain COVID in the US, the numbers that were reported by the COVID tracking project. If someone remembers in the beginning, there was a project that reported infections and deaths in each US state, and they were pretty good and they were pretty good about it. They had the best data I could find at the beginning of the pandemic. So when people started bugging me about what about COVID, then I started looking for data. So I'll give them a good recommendation, the COVID tracking. i can actually still probably find something online they allowed me to use their data and what i'm trying to do is i'm trying to figure out how people move from covered to infected and from covered to recovered or death using all sorts of models and each one of those words here in purple this is a whole group of models infectiousness i have multiple models transmissions i have multiple models response responses response to the pandemic it's behavioral models recovery well notice the few things about recovery i'm not modeling the error back of people who recover get reinfected. Why? Because it's still early in the pandemic. I'm modeling April 2020. There aren't that many recovered people that you can actually – that this error will make sense, the back error from recovered to infected. But later on we can add this. But still I'm trying to explain the beginning of the pandemic. And there are, of course, mortality models models there can be multiple moving things and beyond everything there's an observer model an observer model is the model that corrects all the inaccuracies that or actually it kind of distorts all the things like they were supported in real life like people were claiming that there were claiming that there is infections are not reported wrong or the deaths are not reported correctly. So the observer model distorts things the same way that they were distorted in reality. And there are multiple of those models because there are assumptions. One basic thing is that the model is not always true. The model is always an assumption and we're trying to figure out the best best set of assumptions that actually fits the data and the data comes from multiple sources project i told you about but there's also uh census data from the us there is also papers that extract the information from either data or models. And finally, I even added temperature data of the US states I took from NOAA. But there's many, many models that came from different sources and throughout the presentation, you'll see people contributing this or that model and you'll see a link to wherever they came from. So let's start talking about infectiousness. But before we start talking about infectiousness, let's tell you why am I modeling infectiousness. So at the beginning of the pandemic, maybe even now, I didn't check lately, but the Department of Homeland Security, DHS, released a document called the Master Question List about the pandemic. You can actually find it probably in archives. It was a government document that looked over what they know or think they know about the pandemic. 
They were quite accurate about what they think they know and they had categories of we know this, we think we know this, and these are the questions that we want to ask one of the questions and this is from a document from may 26 this document updated this question later disappeared they probably either figured out or figured out this question it's not what they want but in may 26 they asked the following question what is the average infectious period during which individuals can transmit the disease so So think about it. In the end of May, a few months into the pandemic, the government still didn't know the answer for that. A very good work. If you want the document that explains the pandemic, go and look at this document. They really did great work. I only can compliment them, but they still didn't know the answer. And partially because the answer is more complicated than the question they asked for. Here are some models that you can find on the Internet of people trying to model infectiousness. And I took the references you can find out here. Some of those models, I made some liberties with them. I made some corrections. Some of them i had to copy by hand but you can see the different people or different models sometimes it's like they took information about one infection sort of person sometimes it's models or models of disease and you see the models become different all of them here are normalized so so you will see if it actually they count according to the maximum shedding of the virus, which is considered one. You see the relative shedding of the virus for each day since infection. Day zero is their own infection, then you can see later how it progresses. So you can see if i change the infections as different people have different infectious curves which is one the correct or which is the average one that's the question the model will attempt to answer but what it will actually do it will take all of those into account because it's an ensemble it allows you to take multiple models and model them all together then um the next type of category of models or transmission models which ask you uh if you encounter a person what's the probability of actually uh and the person has covered what's the probability of you actually contracting that parent counter this is the number in percentages it may seem low and i'm actually well it was very surprised about it however um and in the beginning when the first publication i found that it was like 0.6 percent but but the numbers there were a little bit uh but the numbers there were a little bit underestimating the pandemic. So later on, I found out it's more about 2%, the numbers you see here. If you fit in higher numbers, then the numbers don't fit. The epidemic spreads too fast. So this has to do with multiple factors. This is only for one encounter, but remember, This has to do with multiple factors. This is only for one encounter, but remember, there are other things that affect the disease, like number of encounters, which is not part of the transmission model, but population density is. Some states have higher population that may affect the transmission. There's random elements like people coming out of state, and therefore they were not part of the state population, that may affect the transmission. There's random elements like people coming out of state and therefore they were not part of the state population, but they still affect people. So it's a random elements that you add. 
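A toy sketch of the normalized infectiousness curves just described: relative viral shedding by day since infection, each curve scaled to peak at 1.0, then combined under ensemble weights. The three curves and the weights are invented; the actual model uses curves taken from published studies:

```python
# Toy sketch of normalized infectiousness curves: relative viral shedding by
# day since infection, scaled so each curve peaks at 1.0.  The three curves
# and the ensemble weights are invented for illustration.
import numpy as np

days = np.arange(0, 15)                      # days since infection

def normalized(curve):
    return curve / curve.max()

curves = {
    "short":  normalized(np.exp(-0.5 * ((days - 4) / 1.5) ** 2)),
    "medium": normalized(np.exp(-0.5 * ((days - 5) / 2.5) ** 2)),
    "long":   normalized(np.exp(-0.5 * ((days - 6) / 4.0) ** 2)),
}

weights = {"short": 0.2, "medium": 0.3, "long": 0.5}   # ensemble influence

# The ensemble answer to "how infectious is a person on day d" is the
# weighted average of the candidate curves.
ensemble = sum(weights[name] * curve for name, curve in curves.items())

for d in (0, 3, 6, 9, 12):
    print(f"day {d:2d}: relative infectiousness {ensemble[d]:.2f}")
```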
And finally, at some point I noticed that the temperature, sorry, some states had numbers that were way off and we figured maybe it was something to do with temperature. So we had the temperature because both biosurveillance and the human behavior affected by temperature, some colder states, some other states. So models two and five here, each row here is a model. So models two and five have elements that have to do with temperature effect. By the way, they contradict each other. So if they have the same influence, they should cancel each other out. So these are assumptions about how transmission happens. And we'll see later how those behave. But one thing, the element that is missing within the transmission model is the response model. A response model is human response to the pandemic. This means behavior, human behavior. So I'm modeling the response model, how many interactions people have compared to what they had before, how the number of interactions changes. How do you do this? Well, you can take estimators like Apple Mobility data. So Apple Mobility looks at how much people look at their phone and they click on the map app. And they record this and for the entire population of states, they provide you the information for each day so this is one estimator and i took two models that changed the variable that used that some changes model number three is for people and i believe you know people like this who ignored the pandemic said no i'm gonna not gonna change my behavior because of the pandemic i know a few people like that um and i heard the other people also saying so there are some people like this so this is model number three model number four and five this is eric ferguson uh actually this is me taking eric ferguson data eric ferguson montclair university he published information he did analysis of one state it's here in this reference what i did is uh later he sent me information from all states and say when did the state close whereas calls and bars and restaurants open or close a non-essential shop and so on and so forth and each state had different measures so this affects the behavior of people in different states so model four and five say people comply with those measures 50 percent of the time or 90 percent of the time those are two different behaviors and this will change the number of people you interact with to people only in your household or only or the amount of people you usually interact with so all of those are variations of how people behave and just like in society people behave differently each one of those models will represent a part of society when we model things. Finally, if we have infectious transmission and response model, we can actually calculate how many people get infected. But after they get infected, some people die. So mortality model is kind of complicated, how they are constructed. I'm going to show it to you visually. So this is how mortality models look like. This is a probability of dying according to age. According to your age, this is the person will die, but we don't know when the person will die. This is important when we start comparing numbers because the person starts getting infected, eventually they will die at some point if they die. 
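A toy sketch combining a per-encounter transmission probability (the roughly 2% figure mentioned in the talk) with a response model that scales daily contacts by a mobility factor. The baseline contact count and the fraction of infectious contacts are invented for illustration:

```python
# Toy sketch of transmission plus response: a fixed per-encounter transmission
# probability, with the number of contacts scaled by a mobility factor the way
# the Apple-mobility-style response models do.  Only the 2% figure echoes the
# talk; the other parameters are invented.
P_TRANSMIT = 0.02                 # per-encounter transmission probability
BASELINE_CONTACTS = 10.0          # pre-pandemic daily contacts (invented)
FRACTION_INFECTIOUS = 0.05        # share of contacts who are infectious (invented)

def daily_infection_probability(mobility_factor: float) -> float:
    """P(at least one transmitting encounter today) under a response model
    that scales contacts by a mobility factor."""
    infectious_contacts = BASELINE_CONTACTS * mobility_factor * FRACTION_INFECTIOUS
    return 1.0 - (1.0 - P_TRANSMIT) ** infectious_contacts

for mobility in (1.0, 0.6, 0.3):
    print(f"mobility {mobility:.1f}: daily infection risk "
          f"{daily_infection_probability(mobility):.3%}")
```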
I gathered some numbers, did some calculations and assume this is Gaussian and or seven or some variation of Gaussian and this is when people will die so this is for multiple sources however this is not a good enough model I want actually to know at any day what's the probability of a person dying in this case this is where it came filipo castiglione actually made my model uh not only him but his example was very good made my model and multi-scale ensemble model what's a multi-scale ensemble what's multi-scale multi-scale is when you model what happens at the cell level or even if you all this before this at the molecular level in the cell level at the organ level then in the human level one the person level and then at the population level so you have different scales multi-skills and you cross scales so he actually crossed scales. He was modeling how cells would die. And then the organ will die at some point at some cutoff level and then the person dies. And he was basically modeling the propagation of the virus within the cells. And he can give me, because he was modeling this, he was actually able to give me an equation that looks like this, that for a certain age, for a certain time, you can find a probability. So a person at age 60, at day 20, has about 0.4% dying on that specific day, if they have COVID, of course. This is according to his model. At 90, it's about 1%. It's day 21, something like that. But remember, a person may die, there's a percentage of them dying each year, and you can see that there's a probability density function. This was a very good model to plug in, and it made my model a multi-scale ensemble, because I now incorporate models that will operate on different scales finally once you have the mortality model you can calculate how many people are infected and how many people died or you can think you can calculate then you come to the point that you compare those results with the actual results that you obtained from the COVID tracking project. And I have those results already. I'm not trying to predict the future or forecast the future. What I'm trying to do is I'm trying to make the results match all of those models that make sense and we can understand what they are. I want the best combination of them that will match the data. So this is where it becomes tricky because the data is dirty. We don't know how dirty it is sometimes, so we have to throw away assumptions or experts that will tell you how well it is and this is the observer model. The observer model models things as if the difference between what actually happened and what was observed what the observers saw so observer model number one the simplest one says for infections or mortality the numbers reported are correct meaning whatever cover copy tracking project told you about how many positive tests there were and how many deaths. This is the true numbers. This is what you should match to. However, Modelo number two, and most people think that those numbers are skewed. You can multiply the number of infections by five. You don't have to change the number of deaths because deaths are kind of harder to tweak. If someone died, then yeah, you know they died. But, you know, tests, if you test someone with COVID, you don't always test correctly, you don't test everyone. So, there's more variability. 
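A toy sketch of a mortality model of the form p(death on day d | age): an age-dependent overall fatality probability spread over time with a Gaussian centred about three weeks after infection. The fatality table and timing parameters are invented and only loosely echo the figures quoted above; the actual model plugs in Castiglione's multi-scale equation:

```python
# Toy sketch of a mortality model p(death on day d | age): an age-dependent
# probability of eventually dying, spread over days since infection with a
# Gaussian.  All numbers are invented for illustration.
import math

FATALITY_BY_AGE = {40: 0.002, 60: 0.02, 80: 0.10}   # P(eventually dying | age)
MEAN_DAY, SD_DAY = 21.0, 5.0                         # timing of death after infection

def p_death_on_day(age: int, day: int) -> float:
    """Probability of dying on a specific day since infection, given age."""
    overall = FATALITY_BY_AGE[age]
    density = (math.exp(-0.5 * ((day - MEAN_DAY) / SD_DAY) ** 2)
               / (SD_DAY * math.sqrt(2 * math.pi)))
    return overall * density

for age in (40, 60, 80):
    for day in (14, 21, 28):
        print(f"age {age}, day {day}: {p_death_on_day(age, day):.4%}")
```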
So, models number two and three says, say, multiply the number of positives you have by five or by 20 model number three to actually model the correct number model number four came from uh lucas botcher he he said that the number of deaths is something that has to be corrected and he has a publication about it he actually sent me a table of correction he also said multiply the infections by seven point fifteen and the number of deaths well it's very tired per state someday states were more accurate some you were less accurate he did the work I just took his words for granted model number five well this one is interesting this one i said you know what after we did one simulation we know what the numbers we got and we know numbers which you should have got let's create a correction that by average will correct it and we'll just plug it in just to make the numbers right this is kind of like cheating because i i already know what I'm supposed to be and I'm trying to fix my number. So apparently, if I'm correct and everything I'm saying is correct, the model number five should give me the actual, the best model because I already know what I should correct to correct it for. happens. Real life is actually much more interesting than this. Let's look at the results. So the results that you see here were computed, and by the way, I have many sets of them. I have more than 100 simulations by now, even more than 120, and I have 58 versions of the model. I've been working on this for about two years. Each simulation I do takes weeks, days, depends on what it is. This simulation took about three weeks to compute these days is how much it takes because i'm running heavy simulations uh think about it it's like three and a half years of computation on a single core but i'm using 64 cores to actually run this uh why because what i'm doing is i'm running a simulation of 100,000 individuals for each state. Think about it, there are 50 states, I'm running like 5 million people. And actually in the simulation I'm running 48 states. I won't go into details why, some of them don't have some missing data. So, but then you have to repeat the simulations like many times because each simulation is different just like in the world. If you repeat the same process, it doesn't, it's random, it will be different. So I do it, I do it more and more and more. And then I have to run it for all the models and I have to optimize it, meaning running multiple iterations. So all of this takes time. One iteration takes about two days for each simulation i'm running it's like five minutes so the five minutes go to like a huge number because i'm running so many of them i use python tools to do this i use dusk i use very good tools let's look at the results so the results uh look like this at the end. I gather everything in so I can show it's not wait. What happens? This is what the slide here does. If you look at this plot on the upper left, this represents what's going on each circle here you see there are many circles sometimes they overlap each circle here represents one time stamp on the simulation so you are started from april 1st 2020 and this is day 21 meaning this is april 22nd 2020 in new york this is new jersey let's look at New Jersey for a second. The height of the circle represents the error. I want all of those circles to be near zero. This means I have a perfect model. But tell me how to get there. It's far and it's random data. It will never be perfect. 
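Before moving on to the results, here is a toy sketch of the observer models just described: corrections layered between what the simulation produces and what a reporting system would have observed. The 5x, 20x, and 7.15x multipliers echo the talk; the simulated counts and the death correction factor are invented:

```python
# Toy sketch of "observer" models: take the simulation's "true" counts and
# distort them the way real-world reporting would, so they can be compared
# against reported data.  Multipliers echo the talk; counts are invented.
simulated = {"infected": 1000, "deaths": 30}   # per 100k, invented numbers

OBSERVER_MODELS = {
    "reported_as_is":   {"infected": 1.0,      "deaths": 1.0},
    "under_report_5x":  {"infected": 1 / 5,    "deaths": 1.0},
    "under_report_20x": {"infected": 1 / 20,   "deaths": 1.0},
    "botcher_style":    {"infected": 1 / 7.15, "deaths": 0.9},  # death factor is illustrative
}

def observe(counts: dict, correction: dict) -> dict:
    """Turn 'true' simulated counts into the counts an observer would report."""
    return {k: counts[k] * correction[k] for k in counts}

for name, correction in OBSERVER_MODELS.items():
    print(name, observe(simulated, correction))
```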
But we please want to improve it as much as possible to get insight on how the disease behaves so the let's look at the number called observed infected this is the number that i'm trying to match to the number after the slash the model is the number before the 323 is number before the 10th the 1076 is the number after this is what was 7 1076 is what was reported by the carbon tracking project out of 100 000 people for that day in uh in new jersey i'm trying to match it and you see it's way off at the beginning like 300 now it's a bad model number of observed deaths also it's like way off so this is why the circle is very high because the errors in both of them are high and what i'm trying to optimize or the error i call it the fitness score is kind of a combination of both of these it's almost addition but not exactly the equation is a little bit both of these it's almost addition but not exactly the equation is a little bit uh more complicated so what i'm trying to do is i'm trying to get all those circles to zero and how do i do it i change the mixture of the models at the beginning of the simulation all models have the same influence we know what the models are we know how the world should behave and or assume how we behave. And now we're at the beginning, we don't know all the experts, they told us things, we don't know which one's better or not. So what we do is say, okay, we believe you the same. And this is why you see all those blues, the blues are infection models, they have all the same influence, all about 20%. This means about 20 population behaves like one of those infectious scores like this 20 like this 20 and so on and so forth so then there are transmission models the response models the deaf models are the high one they are combined complicated this is why they are high. I won't go into details. And the observer models are pinkish in the end. What I do is I ask when I run the simulation for one, each one say, okay, what if you were more influential? What will happen? And then I run the simulation and run gradient descent algorithm. And it tells me, oh, you know, if this model was a little bit better, then the results would be like this. algorithm and it tells me oh you know if this model was a little bit better then the results will be like this and notice the circles have dropped so results became a little bit better and then a little bit better and now intuition then a little bit more better and you can see the average in the plot here or the bottom this shows you all of the components this is kind of like the average of whatever you see here the weighted average of what you see here considering the size of the state and several other factors sometimes and if you continue simulation you find around single iteration nine you get the best results and notice new jersey now now is much, much better. We're like 866,000 infections, not that bad. And the number of deaths is like, well, six off. It's much better than it was before. This is what the reference model does. 
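A toy sketch of the optimization loop described above: start with equal influence for every candidate model, score the weighted mixture against observed infections and deaths, and nudge the weights to reduce the error. The two candidate models, the targets, and the simple multiplicative update are stand-ins for the real gradient-descent machinery and multi-week simulations:

```python
# Toy sketch of reweighting an ensemble: equal influence at the start, then
# iteratively shift influence toward models whose predictions fit the
# observed infections and deaths better.  Everything here is invented and
# stands in for the real gradient-descent step and full simulations.
observed = {"infected": 1076.0, "deaths": 120.0}     # illustrative targets

models = {   # each candidate predicts (infected, deaths) per 100k
    "fast_spread": {"infected": 1500.0, "deaths": 150.0},
    "slow_spread": {"infected": 400.0,  "deaths": 60.0},
}

weights = {name: 1.0 / len(models) for name in models}   # equal influence at start

def mixture_prediction(w):
    return {k: sum(w[m] * models[m][k] for m in models) for k in observed}

def fitness(pred):
    """Combined relative error over infections and deaths (lower is better)."""
    return sum(abs(pred[k] - observed[k]) / observed[k] for k in observed)

for step in range(5):
    score = fitness(mixture_prediction(weights))
    # Crude update: models whose own predictions fit better gain influence.
    raw = {m: 1.0 / (1e-6 + fitness(models[m])) for m in models}
    total = sum(raw.values())
    weights = {m: 0.5 * weights[m] + 0.5 * raw[m] / total for m in models}
    print(f"iteration {step}: score={score:.3f} weights=" +
          ", ".join(f"{m}={weights[m]:.2f}" for m in models))
```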
Now let's look at all of those models again, and you can see the infectiousness model that had a long set covered is long and takes a long time to to infect a lot well it was very influential while models that are short infection not very influential this means probably covered is infectious for a long time this is the answer for the DHS the transmission model that is dominant the one about point something percent but if you combine them all the transmission is like two percent per person a parent counter not per person per encounter ah and then models two and five they seem to be not very influential, but also they're not very different than each other. This means temperature doesn't have a lot to do with COVID. And we tried, we tried to model C, but it might have some influence, but it's very mild. People apparently, the estimator of Apple mobility data seem to be pretty good because it actually, the models that says oh people behave like a couple mobility is very good only about 10 percent of the population uh ignore ignore it covered and some people but not that much uh followed the instructions or they gave by uh they gave in the uh they gave by this for each state but it seems that the and remember some of those models kind of like overlap in a sense so they will not eliminate each other very quickly but it seems that apple mobility data is a good estimator of how much people interact with each other. The Castiglione mortality model seemed to be very dominant. This is good news. This means we might want to have more models like this in the future, maybe more multi-scale models to compute the mortality better. And you remember the story I told you about the observer model that I added that should be perfect? This is model is model number five no it's the same as everyone else apparently observation observer models don't have much influence here and i cannot explain it fully now i've been to working on two years i have a hunch and the hunch is what we're modeling is something exponential in nature, almost exponential. The beginning, the spread of the disease. Well, those observer models are just correct by something linear. They multiply something. Maybe there's something wrong with them and maybe this is why they're like you're trying to. Model something exponential with something linear, maybe this is why I don't think pressure but maybe something else it may be even the bug and I found bugs in the what to use of working so and then I've eliminated many of them so whatever I'm telling you here you have to take with a grain of salt this is the best I could tell you about after two years of modeling it doesn't mean it's ground true because I'm still working on this and ask me in a year how well or how well it looks like um this is you remember infectious and vhs what was asking is by dhs apparently iteration number nine this is the best combination of the models that i created that probably explains best according to all the data and all the computations i put in the model so far this curve is actually the answer the ths would have gotten if i had those tools ready or in the future if there's a future pandemic and we want to figure out how infectious it is, this tool can be used by only gathering statistics and figuring out this curve from those elements. And at this point, I won't bore you too much. You weren't asking too much questions, but I want to conclude. So everything I told you has to be taking the grain of salt. 
I've been modeling this for two years and I've made mistakes in the past. Like I thought that the transmission was 0.6% in my first publication. Now I found it's more like 2%. Talk to him in a few months, I might change what I'm saying, but these results that you're seeing has been stable for a few months now. So I'm probably onto it, but I still have to make sure that the numbers match not only April, they match May, they match June, and so on and so forth. This means more simulation, it means a lot more computing power, a lot more work, then we'll know more. So the more information you have, the better it is. But I'm trying to figure out what would the government or someone wants to know about the pandemic will know after about a month of data. And if we know that and it is helpful, then the next pandemic will have a tool to help. There is some information about what I use, the Python tool I use and some some reproducibility information. I have to acknowledge many people who helped by providing help, giving me models, giving funding for also institutions, giving me funding for computing power. There are so many of them. And this work, maybe I'm signed on it, but many people did contribute and did help this make happen. Even by hosting my server, it takes a lot of electricity. Even that was helpful. And since I'm a sole proprietor, I have to add a conflict of interest statement. So you'll know where the money came from for this. So I've been funding this project most of the time i do have two two uh two patents on it uh so anyone is interested in using this technology in the future uh and there in the us they better talk to me and uh i would love to have this technology used by someone big entities smaller uh i would like more people so i'll conclude at this point and open up for questions if you have any um yes i i was thinking on with covet 19 we have have like many many um i don't know the name like variants i mean we have omicron and so how that impact on this model so this model is talking about april 2020 even if there are variants it wasn't it wasn't dominant or there was basically one covered at that point also remember i i'm not modeling reinfections because there were not that many people infected. So it's a simple case. The idea is that in the future, I can complicate this case more and more and more and more. And with more and more computing power, I will be able to eventually explain the disease. And here are some caveats. Remember, the data I have is dirty. The assumptions are not always correct. There are many things that I may not be able to do. I'll actually show you one of them. This is a result set that I got a few days ago, I think a week ago, something like this. This is examining the same April, but here I'm looking at the different fitness score. Now I'm almost, I'm saying almost because it's not entirely correct, I'm almost ignoring infection data. I'm only looking at death data. And I'm trying to optimize just to match almost only death data with very slight influence of infection. And when I do this, you can see the system almost doesn't optimize anything. And the results are slightly different. So does it indicate that the death data isn't correct? Remember, I'm talking April, so all of those numbers are small. There was not enough to accumulate by at that point. So there isn't a lot to model in a sense. 
Maybe this is something that also have to be taking into account but i'm trying and i'm showing you that well it doesn't really converge it's kind of like fluctuates on the same thing it does some changes here and you see the models are slightly different than what we had before but but this simulation doesn't seem to improve anything so the now get to really improve maybe it's because i didn't have enough computing power maybe there's something else blocking it in the past i had a bug that would block this by the way i removed that bug and now it's better but like but now it's not converging so why is it not converging well maybe it's because the numbers are too small and at that point in time when i try to model it with the amount of data that it's it's insufficient maybe we'll see in the future i'll try when i add more and more and more data little by little will become better and better and better so but do i answer your question yes yes thank you your question yes yes thank you any more questions uh well also about the fitness um yes for some cities like New York and LA yeah so here New York let's take this point so the fitness score here it became better this is 54 but it should be twice so it's like way off the model at the beginning the assumptions i added the beginning to try to explain what's going on didn't didn't add up correctly the model was terrible but later when i improved it by optimization got to the point that's like an iteration nine i think it's the best results it got it almost point on compared to what it started but 1400 a little bit overestimating out of but it should be 1300 and the number of deaths is uh 90 again a little bit overestimating should be 78. so it started underestimating and overestimating, but if we go and look at one more iteration, you see it goes up again. You see the model at this point starts fluctuating. It says that there's a conflict between New Jersey and New York and some of the others. I will show you. You can see when one goes up, the other one goes up. So it's trying to fit in all of those models to exactly fit it, but all of those changes are relatively small compared to what we saw in the beginning. If I had much more computing power, then maybe we can run it for much more to be much more accurate. But remember, all of those simulations are random and we're modeling a random occurrence. a random and we're modeling a random occurrence. What we saw, it's like if the pandemic will hit again, it will may not be the same thing as we saw before. We saw during the pandemic, different states starting behaving differently all around the world. So I'm trying to find some sort of average in something that's fluctuates all the time. So putting more computing power, I'm not sure I'm even modeling something which is accurate, like the numbers I'm getting are dual representative phenomena. Maybe it's some sort of error that I'm trying to model at this point. So I'm not sure. I usually stop after a few iterations. But even in iteration four, you can see, you already see, this is where things are stabilizing more or less. Duration four is almost the same as nine. You can see New York and New Jersey are much better than they were before. Here you can see New York is overestimating again, but it's much better before. Do I make sense? Yes. 
I was also thinking of maybe this is actually a problem with the source probably, I mean how they were reporting the cases because as far as I remember the cities were the most impacted by COVID so maybe they didn't record properly the cases. This is where the observer models should kick in. You remember I told you the observer should correct between the number that is actual and what is reported. The observer model should do that. But the observer models don't budge. You see, it's like they don't, as if they don't influence anything. I'm still, I'm still baffled about it. This is currently what I have I have talk to me in a few months maybe I'll figure out why but like currently my hunches that it even if you give it here I had models though that you have to move to the number of infections by 20 it is the same as influences the model that says no the numbers are correct so something there is off I don't know why like it doesn't influence anything maybe because the things that really influence the pandemic is the transmission. This is the models, the infectiousness, like how infectious it is. They have much more influence and we just didn't get there. I'm not sure I'm flying hunches at this point, but you'll baffle as much as I am. But this is the best I could explain the pandemic after about two years of work. So with the best tools, I think they are the best, but that's me saying. Okay, thank you. Thank you. Thank you. Thank you. If there are no more questions, maybe we should adjourn. Yeah, I think that sounds about right. Just have a little thing to read at the end. Okay. Okay, well, thank you for coming tonight and we'll have another meeting coming up. I believe in this. Oh yeah, no, next year. This is the last. This is the last meeting of the year. Yeah, well, thank you for coming and thank you for giving such a interesting talk um yeah it was really nice meeting you yeah the same here um it was a pleasure thank you for the nice audience good questions okay have a good one bye-bye you upload this video, make sure to edit the part in the middle. Yes. Just take it out. Okay. Bye-bye. Bye-bye.
PyData Chicago: The Reference Model for COVID-19 attempts to explain USA data, by Dr. Jacob Barhak
2,736
PyData
20221228
For more details about the talk and the speaker, pleaser refer to https://www.meetup.com/pydatachi/events/289899473/ www.pydata.org PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R. PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases. Want to help add timestamps to our YouTube videos to help with discoverability? Find out more here: https://github.com/numfocus/YouTubeVideoTimestamps
2024-06-19T10:55:06.478744
https://www.youtube.com/live/wYZJq8CNmTw
Hey Wiz, what do you think are the best adventure films out there? That's a tough question to answer, Greg. Yeah, yeah, yeah. What if I gave you some data? What if I said, well, between Lord of the Rings, The Hobbit, or Dune, maybe Harry Potter? How would you pick the best? it or Dune, maybe Harry Potter? How would you pick the best? Well, I'd probably think about maybe the ratings and, you know, who is in it and, you know, what happens in the movies, like how well I enjoy them. You know, it's still a tough question, though. So you do like some reasoning to try to figure out, well, maybe there's some sort of critic scores. Maybe those are quantitative. Maybe you might think about the plot and kind of the the details of the story, what people thought about that. There's essentially a lot of data you might try to consume to really answer this question best. Isn't that right? That's absolutely true. Yeah. Yeah. Yeah. So you kind of have to reason through different types of data, almost doing what we might call agentic behavior. Isn't that so? That's right, Greg. Yeah. Yeah. Yeah. Yeah. Well, I think you think we can build an AI to answer this question of what is the best of the best today? I think we can definitely try. Yeah. Yeah. All right. Well, that's what we're going to do. We're going to see you back when we're ready to start building. Welcome everybody today to Data Agents with Llama Index. My name's Greg, and that was Chris, aka Dr. Greg in the Wiz. We're from AI Makerspace. Thanks for taking the time to join us today. Today we're going to try to build an intelligent AI system that uses agentic reasoning behavior to discern. Well, what is the best couple of movies out there? You've all maybe recently seen the Dune series, and we're curious to see how it stacks up against some of the classics. And we've got the perfect tool in store today to be able to look at this. and we're curious to see how it stacks up against some of the classics. And we've got the perfect tool in store today to be able to look at this. We're going to use data agents with Lama Index. By the end of today, you'll understand the Lama Index core constructs of the framework and how they come together into data agents. We're going to do a deep dive on the technical architecture. We're not going to get too high level pie in the sky with the concepts today. We're going to spend a solid hour in Llama Index building this thing out. We hope you enjoy it. Let's get into it. If you have questions along the way, please drop them in the Slido link or directly in the YouTube live chat, and we will do our best to get to all of them today. All right. So as we align our AIM for the day, we really want to understand these core constructs. That's what we're going to build with. That's what we're going to build on. Then we're going to go ahead and build a complex RAG application that uses agentic reasoning. We're going to build indexes for both qualitative and quantitative data, meaning movie reviews and context about those movies. We're going to leverage metadata filtering to be able to move between these two indexes, and that is going to be the agentic reasoning piece of our application. And hopefully we can get an answer. What do you think the best adventure movies are out there? Maybe you can throw it in the chat. So today we're going to dive into the core constructs and then articulate how to build out the semantic and SQL-based separate pipelines that we can combine and navigate using metadata. 
All right, so LAM index, just super high level. What are we talking about? This is the data framework for LLM applications that benefit from context augmentation. This is the keyword they are using today. Now, don't get confused. Context augmentation is nothing more than retrieval augmented generation or augmenting what's in the context window. That's called in-context learning. Now, when we do fact checking, when we try to make sure we're not hallucinating, it's all about reference material. That's sure we're not hallucinating, it's all about reference material, that's what we're augmenting with here. So, you know, again, what's the best? We're going to need some fact checkable information to back up our response by the time we're done with this thing, right? And so we want to get good answers that are rooted in reality, rooted in facts. Overall, the RAG process is to ask a question, feed that question into an embedding model to get a space that is to get a representation in embedding space of our question. Then we're going to look for similar information within our vector store. We're going to set up a prompt template that's ready to receive reference material. And then we're going to inject that reference material back in natural language before we put everything into the LLM. Now, this process is the retrieval process. And this process really does lead the way because retrieval done well makes generation go better. It improves our ability to leverage context. And again, this RAG process is mostly completed at this point, but we haven't done the G, we, this is the next step towards production ready data, context, augmented LLM applications, production ready. And so this is very cool because they're doing a lot of cool things. But one of the things that they did is they said, well, the main abstractions are going to be in core. That's really what we want to focus on today. And then there are a lot of other things that are going on, a lot of things you can interface with. We'll see a few of those today. But we want to, again, focus on the core so we can really build up this idea of data agents. Now, when they released Lama Index v0.10, they sort of showed this image, and I really like it. We're going to focus on the core package today, but we're also going to see that we're leveraging a number of other pieces. We're leveraging the agents, we're leveraging LLMs and embeddings. Of course, Lama Index doesn't have LLMs or embeddings in its core because it doesn't create LLM and embedding models. It's a framework that allows us to leverage those LLMs and embedding models. We can also leverage indices from other companies. For example, we'll leverage Quadrant today. And VectorStore is a particular type of indice. That is the type we're going to use today, is the Vector Store. And then, of course, we can have access to tools. We'll see one tool that we'll use today to help us actually build that metadata filtering capability, namely the OpenAI functions API, the function calling API. But before all that, let's focus on the core of Lama Index. Let's focus on the core constructs. Let's walk through this step by step. The core constructs are associated with steps of the process, the same process we just diagrammed out. We need to load our data. we need to index all of the data, we need to store it, and then we can ask questions against it. The structures within LLAMA Index Core we need to be able to leverage our nodes, embeddings, indices, retrievers, query engines, and data agents. 
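Before the step-by-step walkthrough below, the whole loop just described can be compressed into a few lines. This is a minimal sketch, assuming llama-index 0.10+ style imports, an OPENAI_API_KEY set in the environment, and a ./data folder of text files; those details are assumptions, not the talk's exact notebook:

```python
# Minimal end-to-end sketch of the RAG loop described above, using
# llama-index 0.10+ style imports (module paths may differ by version).
# Assumes OPENAI_API_KEY is set and ./data holds some text files.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()   # load
index = VectorStoreIndex.from_documents(documents)        # chunk, embed, store
query_engine = index.as_query_engine()                    # retriever + synthesis in one interface

response = query_engine.query("What is the best adventure film?")
print(response)
```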
Let's walk through step by step. First, we can think about loading. And we're going to load with nodes. Loading with some noting. And loading is nothing crazy it's just ingesting your data but the actual name for the chunks that we're going to use in llama index are nodes so these are some nodes pictured within a database slow vector store here database. It's a little vector store here. So nodes are the way that data is ingested. It's the how of data ingestion. So nodes are very, very important in Lama Index and everything builds on nodes. They're nothing crazy. It's just a chunk, but it does have metadata associated with it. have metadata associated with it and metadata allows us to be more intelligent down the line because it's data about our data it's up a level of abstraction we can use that later as we layer in more and more complexity the way we get nodes is we parse up our data. We use generally a node parser. This does the chunking of our documents. Okay, not very complex here. Once we get data loaded in, we're ready to do some indexing and we need to leverage embeddings to do this. Now, the process of indexing is simply the idea of structuring our data so that down the line, we're going to be able to easily do retrieval. Okay? Remember, retrieval augmented generation. So indexing is very key because the better we structure, the more easy it is to retrieve. And the way this looks in Lama Index is we, of course, split our documents into chunks, nodes with our node parser, and then we're going to create embeddings for each chunk. Now, each chunk of document is a node. Now, that node might have metadata associated with it as well. Often it does. We're going to pass our chunks through an embedding model to get a representation of that language in embedding space. And all of this is going to allow us to get essentially a list of nodes. This list of nodes is sort of where our indexing process is completed. Well, what's next after we've done the indexing? Well, I mean, we don't want to do the indexing again every single time we have to build something. So after indexing, we want to go and we want to store our indexed nodes within an index. Indices just being the plural of index. Now, again, storing is just about avoiding re-indexing. We could run the entire thing every time, but that would be dumb because we would just be wasting compute to get representations we already had. So this use of embeddings that actually allows us to store our list of nodes goes directly into the vector store. So we take our list of nodes and this vector store is a specific type of index. It's the most common type. It's the type you're probably going to be building with, but there are other types of indices. We don't really need to worry about those today because we're going to use vector stores. But that's really all there is to it. Storing is putting stuff in a database. Once we have that stuff in a database, what's next? Well, we want to be able to wrap our vector store or our database in a way to get stuff out of it, in a way to actually do the retrieval. This is where we need a retriever to allow us to do our querying. And of course, this is just asking questions, but we're going to ask questions and get answers based on the LLMs and data structures that we are using through this process. So the way we did the embeddings, the way we set up the data structure, the way we decided on metadata, all of this matters when it comes to what is retrieved. 
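A minimal sketch of the load, node, index, and store steps just described, again assuming llama-index 0.10+ module paths; the chunk sizes and the ./data directory are placeholder assumptions:

```python
# Sketch of load -> nodes -> index -> store, llama-index 0.10+ style
# (module paths may differ by version).  Chunk sizes and ./data are
# placeholder assumptions for this sketch.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# Loading: ingest raw documents.
documents = SimpleDirectoryReader("./data").load_data()

# Nodes: chunk the documents; each node keeps metadata about its source.
parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)
nodes = parser.get_nodes_from_documents(documents)
print(nodes[0].metadata)          # data about the data, usable for filtering later

# Indexing + storing: embed each node and keep the list of nodes in a
# vector store index so nothing has to be re-embedded on the next run.
index = VectorStoreIndex(nodes)
```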
And so the retriever is simply fetching stuff out of the vector database. It's getting nodes. And the nodes are then often brought back and synthesized using these synthesis modules in Lama Index. Now, it's important to understand, and we've talked about this quite a bit recently over the past few weeks, that when we think about this querying process and we think about retrieval, this is what actually gives us the context augmentation. And so both the chunking, AKA the creation of nodes and the retrieval, AKA the retriever affect this context augmentation. Because we're just wrapping the vector database in a way to find stuff that's similar to our question. Or in the case of a more complex system like we have today, route to the proper database to find stuff that's relevant to our question. And so when we think about kind of the way this is split up, we can think about there is the chunking and then there's the retrieval. And we want to optimize both of these. And it's hard to kind of find perfect ways to visualize this, but we talked last week, if you joined us for advanced retrieval, about the fine line between chunking and retrieval existing at about the vector store. This is where we're going to sort of dive in today. For more on that, check out our advanced retrieval video from last week. But sort of going back to where we started here we're just talking about this process now today we're going to do something a little bit more complex but let's stack this in terms of llama index core constructs let's return well after we've done loading indexing storingying, we're not going to be doing evaluation today. It's time to talk about the query engines. And the query engines are simply generic interfaces that allow you to ask questions over the data. They might be built on many indexes, but they're built on at least one. The index and retriever form the basis for our query engine. It's hard to visualize query engines, to be honest with you. I haven't found a great way to visualize it, but here's another way that we could visualize it, taken directly from a blog from Jerry, CEO of Lama Index, from last year. The query engine really is kind of the heart of what's going on because the query engine as a unit, as a sort of pattern that we can leverage, allows us to build very, very interesting, much more complex applications. Well, that's a good lead in to the idea that we've been trying to get to all along of data agents. And data agents take this idea of a query engine and expand it out. So data agents, I like the Lama Index sort of nomenclature here, they call them LLM powered knowledge workers. They're doing this automated reasoning or perhaps semi-automated reasoning and decision making to decide, okay, I'm going to do some reasoning to decide, do I go hit an API that I have access to, maybe a Gmail API, a Slack API, a SERP API for Google. Maybe I'm hitting DuckDuckGo. Maybe I'm hitting something else. Or maybe I'll just go ahead and tie a couple of query engines together. And now you see the query engine is contained all within a single block here. But there's a lot that goes into a query engine. In fact, there's potentially an entire rag application behind that query engine. And so what we're going to do today is we're going to sort of expand this out. And we're going to use what's called the OpenAI function agent in Lama Index. 
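A minimal sketch of wrapping an index in an explicit retriever and query engine, assuming llama-index 0.10+ module paths; the directory, top-k value, and question are placeholders:

```python
# Sketch of retriever + query engine on top of a small index,
# llama-index 0.10+ style (module paths may differ by version).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine

# Build a small index to wrap (./data is a placeholder for this sketch).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retriever: fetches the nodes most similar to the embedded question.
retriever = VectorIndexRetriever(index=index, similarity_top_k=3)

# Query engine: retriever plus response synthesis behind one generic interface.
query_engine = RetrieverQueryEngine.from_args(retriever)

response = query_engine.query("How was Dune: Part Two received?")
print(response)
for node_with_score in response.source_nodes:      # the context that grounded the answer
    print(node_with_score.node.metadata, node_with_score.score)
```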
And you see, straight from their docs, agents are, quote, a step beyond our query engines: a layer of abstraction up from the query engine. It allows us to do reasoning, and it gives us access to tools, to APIs. This is built on the OpenAI Functions API, which allows us to intelligently call functions, which is great; the model is fine-tuned for that. And what we're going to do is leverage the OpenAI Functions API to build an OpenAI function agent that looks at two different query engines that we build up, and we're going to decide which to use based on the metadata associated with them. In short, we're going to build a functional tool that helps us decide which query engine to use. And remember, query engines are built on one or more indexes via retrievers. That's why we call this the auto-retriever functional tool. It's going to allow us to look at our question, look at the metadata associated with our query engines, select not only the right type of data but exactly which aspect of that data, and then dig in and try to return the best possible context. So I'm beginning a little abstract; let's bring it down a little closer to earth. Today's combined build, in short, is going to be up a layer of abstraction. We're going to take an input, we're going to go to a data agent, and it's going to decide: should I use the semantic query engine or the SQL-based query engine? Should I use the quantitative, SQL-based data, or the qualitative data? And we're going to build these indices and retrievers out with different types of data. Our semantic RAG pipeline is going to be based on Wikipedia data. We're going to take Wikipedia data on Dune films one and two, Lord of the Rings films one and two, Hobbit films one and two, and Harry Potter films one and two. Then we're going to create an index. We're going to prep the Wikipedia data, build a vector store index, and build a vector index retriever and a retriever query engine. Look at that, all the keywords at once. Then we're going to go get some quantitative data. We're going to do this using IMDb ratings for Dune one and two, Lord of the Rings one and two, Hobbit one and two, and Harry Potter one and two, and we're going to build out that index. This is pretty cool the way it works, because we're going to download the data, it's going to be in CSV format, we're going to create data frames just using pandas, and then we're going to use the SQL database index that's built into LlamaIndex, along with the very powerful and very cool natural language to SQL table query engine, where we can ask questions in natural language and query our SQL data directly. We're going to use OpenAI models to get it done today, including the new embedding model from OpenAI. And that's really all there is to it. So again, let us know: what do you think the best movies are? Ones and twos only, because it's not fair to compare all of them. Let's see what the AI thinks as we head over to Wiz for some agentic RAG with LlamaIndex and data agents. Wiz, over to you, man. Yes. Okay. Hello. Thank you. So let's look at the notebook. Basically, we've got exactly what Greg said. The idea here is that we have these data agents, so many different agents, that we want to work together in some kind of way that makes sense.
So what are we going to use? Well, first of all, we're going to use a few new terms that you might not have been so familiar with, and Greg's already done a great job explaining those; they're explained a little bit more in the notebook as well. First things first: there are a lot of images and a lot of stuff going on here, but it's a lot simpler in the code than it looks. So let's first move to the main boilerplate. This is just necessary, we have to do it, and there's nothing we can do about that. The next thing we need to do is get our libraries. You'll notice we're getting LlamaIndex and OpenAI, which makes sense. We're also going to grab the LlamaIndex Wikipedia readers, and we're going to use the Qdrant vector store today. The Qdrant vector store is a really fantastic vector store that's going to help us keep track of exactly what we're storing everything in. When it comes to vector stores, and this is an important point, the actual vector store that we're using matters. The reason is that when we think about storing all this information and attaching interesting metadata to it, we want to be able to access that metadata cleanly, and Qdrant is going to handle all of that for us, which is very convenient. We're also going to use SQLAlchemy and pandas. SQLAlchemy is what lets us have an in-memory SQL database, and pandas is going to help us transition from our CSVs to that SQL database. We also have an optional piece, so you don't have to use it or even think about it if you don't want to, which is the Weights & Biases (wandb) integration with LlamaIndex. With it we can see all of the calls we're making, keep a history of them, and see which ones are failing and which ones are succeeding. But it's optional, and you can ignore it if you'd like. Next up, we have our environment variables, and we can also set our wandb callback if we're going to use it. And then we have our settings. If you remember previous versions we've used, the question is: what LLM should be used if we don't specify one? The idea is that when we don't specify a model explicitly, we'd prefer it to fall back to a particular choice, so you can think of these as defaults, basically. So we've got LlamaIndex set up, and we've indicated what we want to use as our defaults — roughly like the sketch below. Now we have to create an actual index; that's not difficult, and the way we can think about it is as follows.
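First, the setup sketch just mentioned — the package installs and the Settings defaults. The pins, the LLM choice, and the temperature here are assumptions on my part rather than the notebook's exact cells; the embedding model is the one named in the session.

```python
# Rough sketch of the setup described above; the package list, LLM choice, and
# temperature are assumptions, not the notebook's exact cells.
# !pip install llama-index llama-index-readers-wikipedia \
#     llama-index-vector-stores-qdrant qdrant-client sqlalchemy pandas

import os
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

os.environ["OPENAI_API_KEY"] = "sk-..."  # supply your own key

# These act as defaults: anything that needs an LLM or embedding model and
# isn't handed one explicitly falls back to these.
Settings.llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")  # 1536-dim vectors
```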
So we have this idea of vectors and vector stores and all this other stuff, and what we want to do is make sure we have this set up so that the index is created cleanly. The way we're going to do that is with LlamaIndex core. Now, before we can actually set up an index, we need some data. The data we're going to use today, of course, is as described: a bunch of different movies. These are the disambiguated Wikipedia titles, so the Wikipedia reader succeeds. The big idea is basically just to fetch a lot of Wikipedia articles about these movies. Okay, first of all, that's done. Second of all, we need to set this up cleanly so it works well. For this example, we're going to use the in-memory Qdrant client. That in-memory client is very powerful, but it does come with the fact that it's running all in memory, all the time, in this session. For a production environment, of course, we'd want to set up a separate Qdrant service, but for today, just so it runs nicely in the Colab, we're going to use the in-memory Qdrant client. The next section is just setting up a collection. A collection is kind of like metadata on top of metadata: not only do we have metadata, we also have collections, so we can store different kinds of information in different collections and point those collections at different vector stores. We're not going to do that today because we're already doing quite a bit, but it is a thing we can do, and it is pretty dope. You'll notice that when we create our collection, we have to manually set our vector parameters. This is because Qdrant is going to expect that all of our vectors have this size, and the size is based on the default embedding model we chose, which, if you remember, we set up in settings: text-embedding-3-small. It just happens to have a dimension of 1536. Now that we have our collection and our client, we can create our Qdrant vector store. Our vector store is going to be powered by Qdrant, so it comes with all of that metadata handling out of the box, works really well, scales really well. You love to see it. We're then going to create our storage context. Our storage context is what it sounds like: it's context relating to our storage. I know I just said it backwards, but the idea is that we need some way to tell LlamaIndex how it should interact with this vector store. So when we create our vector store index, the storage context acts as a nice layer between the vector store index and the thing that's powering it. We're going to initialize this as empty, you might notice, so there are no docs here, which is not useful on its own; we know we have a lot of docs, but there are none here yet. We're going to go through the process of adding nodes manually, because we'd love to add an additional metadata field, the title. So the idea is we have some ingestion pipeline that's going to convert our documents into nodes, and then we're going to insert those nodes into our vector store. And the reason we're doing this manually is because we want to add some metadata.
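Here is roughly what that scaffolding looks like in code before the manual insert; the collection name and the abbreviated movie list are placeholder assumptions, not the notebook's exact values.

```python
# Sketch of the index scaffolding: Wikipedia docs, an in-memory Qdrant client,
# a collection sized for text-embedding-3-small, and an (initially empty) index.
import qdrant_client
from qdrant_client.models import Distance, VectorParams
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.readers.wikipedia import WikipediaReader
from llama_index.vector_stores.qdrant import QdrantVectorStore

movie_list = ["Dune (2021 film)", "The Lord of the Rings: The Two Towers"]  # ...plus the rest
wiki_docs = WikipediaReader().load_data(pages=movie_list, auto_suggest=False)

client = qdrant_client.QdrantClient(":memory:")  # in-memory: fine for a Colab demo only
client.create_collection(
    collection_name="movies",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

vector_store = QdrantVectorStore(client=client, collection_name="movies")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex([], storage_context=storage_context)  # empty; nodes inserted next
```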
Now, of course, you can add whatever metadata you want, but you have to follow this pattern if you want to add that additional manual metadata. You'll also notice that our ingestion pipeline is going to convert our documents into nodes, which is useful, because we need to break them into smaller pieces so they're more compatible with our RAG pipeline. So now we've inserted a bunch of nodes. Just to be clear: we take each movie and its corresponding wiki doc (this is the title of the movie), we ingest it, we turn it into some nodes, and then for each node we add this metadata, which is the title of the movie. And so now we have an index with a bunch of nodes, and the nodes are associated with their titles. A sketch of that ingestion-and-insert loop follows below. Now we can create a simple query engine: our simple RAG is just index.as_query_engine — couldn't be easier. You might think, well, what LLM are we using? We didn't tell it what LLM to use. And you're right, but we did: remember, in our settings we specified a default LLM, and that default LLM is going to be OpenAI. Pretty cool. All right, so now that we have that, let's check out what the actual prompt is. The prompt is pretty simple: context information is below, here's some context; given the context information and not prior knowledge, answer the query — query — and then it expects some answer. It also has a refine prompt that can be useful if we want to leverage it; we won't be specifically leveraging it today, however. So we can ask questions like, who is the evil wizard in the story? And we get Lord Voldemort. Already we can kind of tell what a potential issue is: if we're dealing with six different movies that all have evil wizards, this is not necessarily the response that we were hoping for or looking for. We could also ask questions like, who are the giant beings that roam across the world? And it can say stone giants are the beings that roam across the world. But again, Dune has sandworms, we've got Ents, so we're missing the movie-specific information, which is not necessarily the most useful thing in the world. So let's make that better by using the auto-retriever functional tool. There's an image for this that you see in the slides. Basically, all it's saying is that we're going to do some filtering before we return our list of potential contexts, and this filtering is going to be based on our LLM's decision. So when we go to our actual implementation, you'll see that we have this vector store info. This is a collection of metadata, which is going to include the title of all of our movies. It's basically just saying, hey, based on the metadata, you're going to want to have some awareness of what you can choose as part of the metadata.
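As promised, here is roughly what that manual ingestion loop and the baseline query engine look like; the chunk size and the title-only metadata are the main assumptions here.

```python
# Sketch of the manual ingestion loop: chunk each wiki doc into nodes, tag each
# node with its movie title, and insert them into the Qdrant-backed index.
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter

pipeline = IngestionPipeline(transformations=[SentenceSplitter(chunk_size=512)])

for title, doc in zip(movie_list, wiki_docs):
    nodes = pipeline.run(documents=[doc])
    for node in nodes:
        node.metadata["title"] = title  # the field we'll filter on later
    index.insert_nodes(nodes)

# The "simple RAG" baseline: a query engine straight off the index, falling
# back to the default LLM we set in Settings.
simple_query_engine = index.as_query_engine()
print(simple_query_engine.query("Who is the evil wizard in the story?"))
```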
So if we tell it, hey, you can choose the title of a movie, but we don't tell it what movies we have, that's not a very useful tool. So we have to — and this is true whenever we're doing AI engineer type work, building RAG applications, working with LlamaIndex — be sure we're thinking about how the LLM is going to interpret this information and how it's going to leverage it to give us a sweet answer. In this case, all we're going to do is let it know the titles of the movies we have. In this next piece, I'm not going to spend too much time on it, but the idea is that this is just describing the function we want to be able to call that's going to filter our metadata. So we have some query, a filter key list, and a filter value list. And then we're going to build this auto-retrieve function. This is a function that lives on our side, and basically all it does is retrieval based on a set of filters. You'll see it takes query, filter key list, and filter value list — that might seem familiar, because we just built that above in Pydantic. It's going to build a set of exact match filters based on the keys and values we've provided. In this case, we're keeping it simple: title is our only metadata field, so if it needs to filter, it's going to use title. Then we're going to use our vector index retriever, similar to what we did before, set up a query engine around it — a retriever query engine — and then query it with our query. And that query is going to take into account this actual metadata filter. We'll see an example of how this works in just a second. All that's left is that we want this to be in some form agentic. So we're going to describe our tool and when it should be used: use this tool to look up non-review-based information about films. Then our auto-retrieve tool takes the function we've created, and we describe the tool name, which should be descriptive, because the LLM is going to use this information to determine when to use this tool. We provide our description, which should also be descriptive. And then we're going to use our auto-retrieve model, which is this bit here. The reason this auto-retrieve model from Pydantic is important is that we need OpenAI, in this case, to be able to call our function reliably, and this Pydantic model helps us describe in great detail to OpenAI how to call the function. You'll see we have query: it should be a string, and here's the description of how that string should look. We have our filter key list: it should be a list of strings, and here's what it should contain. So this is the idea: we have to describe to OpenAI exactly how to call this function, and that's what we've done using this auto-retrieve model that we built in Pydantic. Very cool. A sketch of the whole tool setup is below. Okay, all that's left to do now is let her rip: we create an OpenAI agent from tools, pass in this one tool, set verbose to true so we can see the outputs, and then ask something like, who starred in the 2001 film? Check out what happens.
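In code, the whole auto-retrieve tool plus the agent wrapped around it looks roughly like this; the field descriptions are paraphrased and the top-k is an assumption.

```python
# Sketch of the auto-retrieve tool: a Pydantic schema that tells OpenAI how to
# call the function, the function itself (exact-match metadata filters plus a
# retriever query engine), and an OpenAI agent wrapped around the tool.
from typing import List
from pydantic import BaseModel, Field
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
from llama_index.core.tools import FunctionTool
from llama_index.agent.openai import OpenAIAgent

class AutoRetrieveModel(BaseModel):
    query: str = Field(..., description="natural language query string")
    filter_key_list: List[str] = Field(..., description="metadata keys to filter on, e.g. 'title'")
    filter_value_list: List[str] = Field(..., description="values for each corresponding filter key")

def auto_retrieve_fn(query: str, filter_key_list: List[str], filter_value_list: List[str]) -> str:
    # Build exact-match metadata filters from the key/value pairs the LLM chose.
    filters = MetadataFilters(filters=[
        ExactMatchFilter(key=k, value=v)
        for k, v in zip(filter_key_list, filter_value_list)
    ])
    retriever = VectorIndexRetriever(index=index, filters=filters, similarity_top_k=3)
    query_engine = RetrieverQueryEngine.from_args(retriever)
    return str(query_engine.query(query))

auto_retrieve_tool = FunctionTool.from_defaults(
    fn=auto_retrieve_fn,
    name="semantic_film_info",
    description="Use this tool to look up non-review-based information about films.",
    fn_schema=AutoRetrieveModel,
)

agent = OpenAIAgent.from_tools([auto_retrieve_tool], verbose=True)
print(agent.chat("Who starred in the 2001 film?"))
```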
When we ask who starred in the 2001 film, we get: calling function semantic_film_info with args — query: cast of Dune 2001 film; filter key list: title; filter value list: Dune 2001 film. So we're actually going to filter out every other movie in our vector database. We only care about Dune 2001, and not at an implicit level where we're relying on similarity to surface only closely related material — in a literal sense, we are only going to query documents related to Dune 2001. And then we get the answer of the people who starred in the movie. Let's go. Then we can say, who are those giant guys from Lord of the Rings that roam around the forest? In this case, again, it's going to use that semantic film info tool: it queries for characters from The Lord of the Rings: The Two Towers, filters on the title The Lord of the Rings: The Two Towers, and then we get a bunch of dope characters, one of whom is Treebeard. So the giant guys from The Two Towers are likely the Ents, also known as Treebeard and the Ents of Fangorn Forest. So our query is kind of bad — this is not a good query — but it doesn't matter, because thanks to the filtering we don't have to rely on our context merely happening to contain Lord of the Rings or Lord of the Rings-adjacent material. We can guarantee it does by using that metadata filtering. Very cool. So next, we're going to do the same thing, but instead of just filtering on metadata and then relying on vector search, we're actually going to run some SQL queries. We're going to import all of our movie review tables as CSVs, load them as data frames, and then push those into our in-memory SQLite database. Again, if this were a production environment, we would not want an in-memory SQLite database; this would just be some database that we have. But you can see we can have a ton of different tables — even in this toy example, we have a lot of different tables in our database — and we're going to see how well it works given that. So now that we've created a database with tons of tables, and those tables are all based on those review data frames (the review DF here basically being this exact CSV file represented as a pandas DF and converted into a SQLite table), we have the ability to load that SQL database as a kind of index. What this means is that we give it an engine — the engine, in this case, is the thing that points to our SQLite database — and then we have our list of movies that it's going to care about. Say we had a hundredfold more tables, say we had every movie: we could have come up with a list of just the adventure movies and only used those specific tables in this particular index. So even a level above, we can still think about metadata. In this case, of course, we just have the movies that we do, so we're going to use every table, easy peasy — a rough sketch of that CSV-to-SQLite setup is below. Now we're going to create a query engine, and it's called the NLSQLTableQueryEngine. The name might make sense.
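Before that engine, here is a minimal sketch of the CSV-to-SQLite step just described; the CSV file names are hypothetical placeholders, with one ratings/reviews file per film.

```python
# Sketch: load each film's review/ratings CSV into an in-memory SQLite database
# and hand the resulting tables to LlamaIndex. File names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase

engine = create_engine("sqlite:///:memory:")

table_names = []
for csv_path in ["dune_2021_reviews.csv", "the_two_towers_reviews.csv"]:  # ...plus the rest
    table = csv_path.replace("_reviews.csv", "")
    pd.read_csv(csv_path).to_sql(table, con=engine, if_exists="replace", index=False)
    table_names.append(table)

sql_database = SQLDatabase(engine, include_tables=table_names)
```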
What this does is take an NL, or natural language, query, convert it to SQL, run that SQL on the engine we provided, and then return us a result. Very cool. We can wrap that all up in a query engine tool where we describe exactly what's happening: we can indicate what movies, or more generically what tables, we have in this database, what they contain info about, and when this tool should be used. Once that's set up, we can ask things like, what is the average rating of the second Harry Potter movie, and get a response like: Harry Potter and the Chamber of Secrets' average rating was about 7.54 out of 10. And then we can ask which has better reviews, Lord of the Rings or Dune, and get the response that Lord of the Rings has about 9.87 out of 10 and Dune has 8.34 out of 10, so Lord of the Rings has better reviews compared to the Dune series. So we're on the way to answering that question Greg had: which is the best adventure movie? What if we wanted to use both? Well, all we have to do is use the combined pipeline. You'll notice that we still have our OpenAI agent; all we do is add both an auto-retrieve tool and a SQL tool — a sketch of that combined agent follows below. Then we can ask questions like, what movie is about a chamber, and what is the average rating of the movie? In this case, we're going to see a lot of tool usage. We see semantic film info called with args, and it filters by all of our movies, because the answer could be any of these movies, and it gives the output that Dune sometimes has things about chambers. It's also going to issue a SQL query to take the average rating from the Harry Potter and the Chamber of Secrets table, and then finally ask the semantic film info tool to consider just the Chamber of Secrets film. We finally get a response that Harry Potter and the Chamber of Secrets is about a chamber, and the average rating of the movie is about 7.54 out of 10. So the idea is, again, that we have this ability to make multiple calls. We can also see it working with two different semantic query calls: what worlds do the Lord of the Rings and Dune movies take place in? We get the response that the Lord of the Rings movies take place in the world of Middle-earth, while the Dune movies feature various worlds, including Caladan, Arrakis, Giedi Prime, and Arrakeen. Very cool — we can use multiple tools to figure this out. Finally, we ask the all-important question: which of the following movie series is considered the best — Harry Potter, Dune, Lord of the Rings, or The Hobbit? Base your answer on both review and non-review information. We get the final answer that, among the movie series mentioned, Harry Potter is considered one of the best, and additionally, Dune is also considered one of the best. So we kind of get a tie, I suppose, in the LLM's interpretation. Sorry to disappoint you, Richard, who said you would not trust RAG if it picked anything other than Lord of the Rings.
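Roughly, the text-to-SQL engine, its tool wrapper, and the combined two-tool agent described above look like this; the tool name and description are paraphrased stand-ins.

```python
# Sketch of the natural-language-to-SQL engine, its tool wrapper, and the
# combined agent that chooses between the semantic tool and the SQL tool.
from llama_index.core.query_engine import NLSQLTableQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.agent.openai import OpenAIAgent

sql_query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=table_names)

sql_tool = QueryEngineTool(
    query_engine=sql_query_engine,
    metadata=ToolMetadata(
        name="sql_film_reviews",
        description=(
            "Use this tool to answer review and rating questions about the films "
            "stored as SQL tables: " + ", ".join(table_names)
        ),
    ),
)

combined_agent = OpenAIAgent.from_tools([auto_retrieve_tool, sql_tool], verbose=True)
print(combined_agent.chat("What movie is about a chamber, and what is its average rating?"))
```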
In its impression, Harry Potter and Dune are tied for the best movie series, based on the data it has access to. So that's the idea. That's what we're doing with all of this: it's basically just using the LLM to decide when to use tools, to decide which tool is most relevant, and to use that information to make sure we're using the right tool at the right time. That's that data agent, right? We have many sources of data, and we need something that can be smart about picking when it uses what. Thanks so much for watching the code demo. Now I'm going to pass it back to Greg. Before I do, I've got to tell you to like, subscribe, and ring the bell — it does help on YouTube. I know it's kind of corny to say that, but we're here every Wednesday talking about cool stuff. So thanks so much, and back to Greg. Yeah, awesome. I've got one thing I just want to chat with you about for a second, just to make sure it's clear to the audience. So we built up this data agent, and the thing that we did is we said, well, this is the auto-retriever functional tool, right? And we said this thing performs auto retrieval from a vector database and then applies a set of filters. And you actually did this on just the semantic query engine, and then you also did it on the semantic plus SQL-based query engine, right? So that sort of agentic behavior can happen in all sorts of ways, and we demonstrated a few today. Is that right, Chris? That's correct, yeah. Yeah, so there are many layers here when you put this reasoning engine at the front. Even within a single semantic query engine, we saw that some of the movies have similar monsters, some have similar caverns and caves — adventure movies have these similarities in common. So you really have to look at your data and decide on the right metadata, decide on how you're assigning the prompt to the agent, but also on the prompting in your own query, right? It all comes together into better answers. Absolutely right. Absolutely right. Okay. All right. Well, let's wrap up, and then we'll jump into questions in just a moment. So, everybody, that concludes our event for the day. We've got a number of great questions coming in the chat. We saw the core constructs of LlamaIndex v0.10, from nodes to embeddings, to indices, to retrievers, to query engines, to data agents. And we saw that data agents really are just that next layer of abstraction above the query engine: they allow us to leverage multiple query engines and also give us access to tools, one of which could be the OpenAI Functions API, so that's kind of an interesting thing as well. There's a lot more good stuff when we talk about metadata filtering, reasoning during retrieval processes, and agentic systems that we look forward to bringing you on YouTube Live as we continue into 2024, really the year of agents. So if you've got questions, please go ahead and throw them in Slido or in YouTube. We will triage by taking Slido questions first, so please upvote those. And let's get Wiz back on the stage. Let's go. We've got a bunch of good stuff coming in, some of the classics as well.
But the first one that I saw a while ago on Slido is: does the embedding apply to the text chunk in the node and the metadata, or just the text chunk? For the default case, just the text chunk. You can definitely set it up in a way where you combine the metadata, or where you also embed metadata — of course, the sky's the limit — but by default, it's the text inside the node. Okay, okay. Next question, another anonymous one here: how do the different response modes of the query engine affect the agents afterwards? Any recommendation? Yes. So, assuming I understand the question correctly, how it responds can affect the agents afterwards. Depending on what information it responds with, how much information you're sharing in that message thread can absolutely impact the performance of the agents afterwards. When it comes to what's best, it's just going to be use-case based, but I would say sharing any and all information that is relevant to help your agent is the best way to go. All right, next question, another technical one: do we need the storage context with the ingestion pipeline? The ingestion pipeline example on the LlamaIndex website does not use it. Yeah. So basically what we do is actually create the index first, and that's what we use the storage context for. Then we use our ingestion pipeline to build nodes, which we then insert into that vector store index we created with the context. So the way we did this today, you absolutely do need that step first; basically, it's telling LlamaIndex what it can expect from our Qdrant vector store index. All right, all right. Okay, let's do a couple quick ones here. Is "nodes" just a LlamaIndex term? I mean, basically, yes. It's close to synonymous with things like "document" from classic NLP. In LlamaIndex, obviously, nodes mean a little bit more than just that: the idea is that they represent some standard first-class object — Greg already went through that. But in the way that they mean node, yes, it is a LlamaIndex-specific term. All right. And then on data agents: the data agent seems to rely on the OpenAI function tool capability, at least the one we demonstrated today. Which other LLMs have a similar capability that can replace this OpenAI function capability? Anthropic? I mean, yes. There are actually a lot of LLM providers now that offer that kind of functionality. So the thing I would say is: check your LLM provider and make sure that, if you need or want it, you have access to some version of function calling. Some of them even use the same API — OpenAI being, quote unquote, first on a lot of these things means that a lot of people are adapting to the way they do things, so you can sometimes use even the same interface with other LLM providers. But I would always recommend you check your LLM provider to see what capabilities it has and how it exposes those capabilities to you as a user. All right, all right. Yeah. So speaking of either-or, we're getting a ton of questions about Langchain versus LlamaIndex. Let's go ahead and start attacking some of those.
Could we have used Langchain instead of LlamaIndex to do this? I mean, we could have, yeah, for sure. Could we have done it in the way that we did it with LlamaIndex? No. The way that I would think about this particular implementation with LlamaIndex is that we have a bunch of different sources of data, and our OpenAI function-calling agent is just choosing which of those sources of data to use. So tools are more akin to sources of data in LlamaIndex, whereas Langchain has a very generic version of a tool; these data agents are meant to pick between different piles of data more cleanly, if that helps. So we could have asked the same questions, but the specific details of the infrastructure and the routing of the prompt to different places are going to be slightly different. That's correct, yes. Okay, all right. And presumably, if we're using the same reference information, it's also going to say Harry Potter's the best, maybe. I'm not going to make that claim. There is perhaps some bias introduced by the prompt, or it could be anything, but it probably would — sorry to disappoint those in chat. So just double-clicking in on this: Richard asked earlier, generally, okay, I've got Langchain, I've got LlamaIndex, I'm doing some agents — how do I choose, if I can ask the same questions either way? I would say stick with whichever one your organization is already using — that's kind of the cop-out answer, right? If you are making a decision on which to use, I would say it largely comes down to what it is you're trying to do. If what you're trying to do is organize some kind of querying system across many different, various data sources that have representations in many different ways — databases, PDFs, blah blah blah, you get the point — then I would stick with LlamaIndex, absolutely. If you're looking for a more generic or free-form agent, or one less strictly, specifically meant to analyze information across many data sources, I might think about Langchain for that. Okay, next head-to-head here, from Andres: LlamaIndex NL-to-SQL versus the Langchain SQL agent. Is it right to compare these? Are they doing the same thing? Is one better than the other today? The answer to your question is actually: which model are you using, and how good is that model at that task?
Because under the hood, we're relying on an LLM to do this operation. LlamaIndex and Langchain aren't doing anything other than asking the LLM to do it. So that question is really about which LLM you're using, and I would say GPT-3.5 Turbo "or better" — look at the leaderboards if you're interested in what "or better" might look like. The idea is that that's the strength, quote unquote, of model I would recommend, or class of models: your Anthropics, your Coheres, your OpenAIs. The actual tooling in both frameworks is just asking the model to do it, so no sincere difference there. Okay, okay. Speaking of asking models to do things, what about open source? We saw Snowflake this week crushing it with the embedding models. Have we got some open source LLMs that can compete yet with this agentic stuff? No, unfortunately not. When it comes to complex reasoning, you just need a very big model or a very well fine-tuned model. Some shops might have models that work, but generally, if you're going to be using this kind of agentic behavior, it's better to use one of your favorite closed source models. That's not to say it won't work most of the time using something like a Mixtral or a Mistral, but it is to say that if you need that reliability, if you need that accuracy in complex reasoning and understanding, we're still thinking about closed source today. All right, all right, all right. Well, that about wraps up the time we have today. It looks like we did get another request for the Ragas notebook that we owe folks from last week, so we'll make sure we get that out to you. Thanks for the nudge, folks — we definitely appreciate it; it gets busy over here at AI Makerspace. For those of you that had other questions in the Slido that I'm not sure I quite fully understood, feel free to drop those into the YouTube comments and we'll get to them asynchronously. But thank you so much for joining us live, everybody. And Wiz, thanks for joining us today for the Q&A and the code demo, per usual. All right, now that you've already liked and subbed, please go ahead and join the AI Makerspace community. If you're not there, we just passed over a thousand folks in Discord, and we'd love to have you join the party. There's lots of stuff going on all the time, and we're continuing to iterate more and more. By the way, we're doing an interview in about 30 minutes with one of our AIM community members who's been successful — you might want to come check that out live. And if you're ready to start learning for free, a little bit deeper into all these concepts, you might start with our LLMs in Production Cohort 1 course that we open sourced not long ago. And if you want to really accelerate and give yourself many, many trailheads on which to learn about prototyping and production AI, check out our AI Engineering Bootcamp — that's our flagship course today, although we've got a lot more courses in the works coming down the pipe as we stay out on the open source edge in 2024 and beyond. So thank you, everybody, for coming. If you have feedback, please let us know. I know we got a couple of suggestions on future events — keep those coming, please. And that's it. If you're doing anything this week, don't forget to not only build and ship, but to share.
Tag at AI Makerspace if we encourage you to do something this week. Tag me, tag Chris, tag folks in the community. We'd love to amplify you and help build your personal brand, help you achieve your goals as you move forward into your next phase of generative AI career. All right, everybody, that's a wrap. Till next time, keep building, shipping, and sharing, and we will do the same. See you on YouTube live again soon. Bye, guys.
Data Agents with LlamaIndex
3,671
AI Makerspace
20240418
Dive into the future of AI with our groundbreaking event on leveraging agents in LLM applications for 2024! Discover how to skillfully integrate agentic reasoning with advanced techniques like RAG and fine-tuning to architect applications that deliver both performance and cost-efficiency. This session offers an in-depth look at the innovative LlamaIndex v0.10 and its revolutionary approach to AI engineering with "LLM-powered knowledge workers." Learn how to construct complex RAG pipelines that masterfully navigate both structured and unstructured data, leveraging semantic pipelines, NL2SQL tooling, and OpenAI’s cutting-edge metadata filtering. Perfect for AI Engineers and Leaders aiming to enhance their projects’ bottom-line value through smart system architecture, this event is a must-attend to stay ahead in the dynamic world of AI. Click to join us and transform your understanding of AI application architecture! https://lu.ma/llamagents Have a question for a speaker? Drop them here: https://app.sli.do/event/5959y4pV8cT79dKzBs42Hr Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 How'd we do? Share your feedback and suggestions for future events. https://forms.gle/iqwYN9pEUAxUi4Av9
2024-06-25T20:10:16.530971
https://www.youtube.com/watch?v=8tS_84-5Hmo
Hey, Wiz. So do you know, are FANG companies still pretty dope tech stocks to buy? Yes. I think, you know, they got to be good. It's FANG. It's FANG, right? They drive the S&P 500, right? I think so. But there's also like new dope stocks like Microsoft, like NVIDIA. Everybody's talking about these ones, right? And they're top of the market cap. Yeah, I think so. That's right. Yeah. So like those ones feel like no brainers. But to me, from like an AI perspective, it kind of feels like it's not real clear where we shouldn't be investing our money if we want to play the stock market game on FANG. I wonder, you think we could build an AI to help us decide on the best non-obvious FANG tech stocks? I think so. i think probably we could do that yeah we could we could do that okay you think we could use some agents we definitely use some agents yeah we can use like a bunch of agents like multiple agents and really like maybe like even build an extra smart crew of agents to help us out i think we could build an extra smart crew yeah i think we could do that yeah yeah okay so my gut tells me that if i had to pick a fang stock today and put like 1k into it like meta it feels to me like they're crushing the hardest based on what i know about the industry you think what do you think? I, you know what, to be honest with you, uh, I, I'm not quite sure, but the only way that we can really find out is to build a crew of agents, right? So that's right. That's right. I think, I think we should build, ship and share an awesome LLM application to help us out with this. And it does seem like Crew has set us up pretty well to do this. Also worth noting, everybody, financial services firms around the world today are building out specific tools to help analysts, human in the loop, augment their ability to look critically at specific stocks. So through our application today, Wiz, it should be fun to imagine like ourselves as financial analysts, build some AI to help us and even build a whole crew of virtual assistants to see what we can do. Sound like a plan? Sounds like an awesome plan. All right, let's get into it, everybody. Today we talk crew, AI. I'm Dr. Greg. That was the whiz. And it's really interesting to try to find use cases out there today that multi-agent crews are actually leveraged for, that are actually built out for industry. We think we've stumbled upon one by talking to our partners around the industry and what the companies that are coming to them, financial services firms, insurance organizations, or healthcare organizations, what these types of companies are doing out on the edge, we decided to go with a stock predictor tool analyzer today to really root our discussion of multi-agents in something that's relevant to the industry right now. You'll learn all about one of the latest tools that you've probably heard about already, Crew AI. You'll learn about the core constructs and you'll learn about when to choose it over some of the latest tools that you've probably heard about already, Crew.ai. You'll learn about the core constructs, and you'll learn about when to choose it over some of the other tools out in the industry. If you have questions, obviously, jump into the chat and ask. But if you want your questions to get prioritized, please use our Slido link that will drop into the live chat now and upvote your favorite questions. We might not have a chance to get to each and every single one. All right, everybody, let's dig into multi-agent crews with Crew AI today. 
And as we align our aim towards the sesh, we want to make sure that we understand the multi-agent app pattern. This kind of brings together all of the core foundational aspects of generative AI, and it's really important to understand the fundamental concepts. Crew AI is just another set of constructs with which we can leverage those concepts. Those core constructs do matter, and we're going to take a look at them. And then we're also going to talk about a question we're hearing and thinking about a lot: when should I use Crew AI versus LangGraph? It's almost becoming the LangChain-versus-LlamaIndex discussion on the multi-agent front. So we'll talk multi-agents, we'll talk Crew AI, and then we'll build our FANG stock predictor, and we'll try to see: should we be investing in Meta, in Apple? What does the data tell us? We have our own insights and intuitions as people out here paying attention to what's happening at the edge, but what does the data tell us? If we hired experts to help us out with this, what would they say, and what data would they use? So let's talk about multi-agent systems for a minute. Agent, when you see the word, just indicates a specific pattern: the pattern of reasoning followed by action. LLMs are great at this, and thinking about the simple two-step loop: we're going to ask a question or assign our system a task, and the LLM is going to decide if it knows how to complete the thing or answer the question. It might go straight to an output, just like a simple LLM application would, but it also might go and search and find some information. It might use any number of tools it has access to, to find additional info that might help with the task it's trying to solve or the question it's trying to answer. And it might loop: reason about the new data, the subsequent question that should be asked, the type of tool we should pick up next, and continue this loop until we get our final answer. So this agent here is just a single-agent system. We've done a lot of work talking about agents in the past; if you're new to our channel or new to agents, I highly recommend checking out our For Everyone series. These are shorter, less technically detailed discussions of agents and multi-agent systems. And if you're into developing these tools, check out some of the work we've done on LangChain and LlamaIndex, talking about agents and multi-agents. We don't want to cover all that here; what we want to do is give only the essentials and focus on the tool of the day, the application of the day. One first bit of terminology: don't get confused by the agent, agentic, agent-like divide. It doesn't exist. It's not real. We're seeing a lot of people say, well, maybe I like it when I see "agent" or "agentic" and it feels less marketing-y. That might be true, and you can sound smarter by saying agentic, but we're not actually talking about something different. We're talking about this pattern. And when we talk about multi-agent systems, now we have more than one independent agent in our system, and they're connected in some sort of specific way.
The way that they're connected and the complexity of the system can vary pretty greatly among the types of systems being built today. And in fact, there's a distinction that we like to put out there, and that we'll talk about a little bit today: the idea of multiple agents sequentially moving toward a specific task, versus a multi-agent, more flexible system that's able to solve broader-scope problems instead of specific tasks. We'll talk about this in the context of Crew AI, and also Crew AI versus LangGraph. And if we hold the image of a crew in our mind, that can be kind of helpful today, because what is a crew on a boat doing? They are all rowing in the same direction. It's not a complex game where many different crew members are moving in many different directions and all being orchestrated with one another; it's a very synchronous orchestration. We don't want people paddling in opposite directions — we would go nowhere quite fast. And when we talk about multi-agent systems, there are a couple key reasons we would want them. It's easy to group roles, responsibilities, and access to tools; it's a very clean way to separate the prompts that come together to solve our application; and it's easier to demonstrate and conceptualize what you're building to others, to your boss or to anybody else you want to share your application with. There are a number of tools that you'll hear about today. One that's been around and was early in this game is a tool called Autogen from Microsoft. If we look at the description of Autogen: a multi-agent conversation framework, build LLM applications via multiple agents that can converse with one another to accomplish tasks. We see this multi-versus-multiple distinction happening right here already, where it's sort of ambiguous, almost like the agent/agent-like distinction. Okay. Now, if we look at Crew AI, Crew AI says it's designed to, quote, enable agents to assume roles, share goals, and operate in a cohesive unit, a well-oiled crew. And when we get to a tool like LangGraph, the language is significantly more abstract — and I think there are reasons for that, which we'll get into a little today — build stateful, multi-actor applications. So LangGraph is really, really concerned with state, and we can add essentially cycles to any applications that we've built using the LangChain ecosystem. So I want to put forth a simpler way to think about this. If we think about LangGraph, we can think: agents with access to tools can help solve problems. When we think about Crew AI today, we can think: agents with access to tools can accomplish tasks. Now, a task is more specific than a problem space — you might call it the task space — and I think this is one key difference between the two tools: the level of abstraction, the level of capability they're at. So I want to bring Wiz up to talk a little bit about this before we get into the core constructs. All right. So we've got this idea, first off, of multi-agent versus multiple-agent that we continue to talk about, but which isn't broadly a distinction in the industry. The idea is that the multiple-agent setup is more like we're all moving in the same direction: it's more linear, more sequential, less dynamic. You can't really go anywhere.
If you're any given agent, you're more constrained, whereas the multi-agent system is more relaxed on constraints. That's the way I'm thinking about this. How are you thinking about the difference? Yeah. I mean, for me, the big thing is: do the agents interact with one another? Do they depend on each other's outcomes? Can they evolve closer to sharing ideas, building off of each other? Versus a multiple-agent system, which is like: agent one executes plan, agent two executes plan, agent three executes plan. Or we have a swarm of agents, so there are fifteen agents, but they're all doing the same task. That's how I like to think about the difference between the two, where multi-agent requires some interaction between the agents that can allow them to be a more cohesive system, versus just, say, a DAG of agents, which we could consider multi-agent, but really it's just one agent, then the next agent — so it's just multiple agents in a row. That's right. That's how I like to think about it. And it's not even, as you mentioned, about the sequential, in-a-row thing. It could be parallelized agents doing the same thing — almost like the GPU analogy comes to mind here. Which, of course, gets us to multiple multi-agents, right? We can just keep stacking the words, you know? That's right. That's right. And so if we think about this in the context of the tool we're looking at today, Crew AI, obviously the image of a crew rowing a boat together is very aligned: it's very sequential, very focused on a specific goal. And my understanding is that Crew AI is built at a layer of abstraction because of that focus on one task, which allows us to actually write less code. It's like a lower-code solution than something like LangGraph, but it also limits the total flexibility of the system for agents to depend on one another. Is that right? Yeah, I think I would agree with that. The key insight is "less-ish" code. We still have to write quite a lot of code, and in fact, the number of lines of code you write might even be more with something like Crew AI. However, what you're really spending time writing is things like tasks and prompts — the prompt engineering piece — and you're not spending a lot of time describing the flow of the process. With LangGraph, we're building this flow diagram, or whatever you want to call it, some kind of graph of nodes that dictates where to go, what edges to follow when, and all of this. You're really being prescriptive with that flow, versus crew, where we spend very little time actually writing that flow and a lot more time making sure we have all of our tools set up to work in the ecosystem and making sure we set up our tasks well. And that extra level of prescription, especially through a task layer where we're able to give it a task, a directive — as opposed to LangGraph, where we certainly can do that, to be clear, but we have to do it separately.
I think that makes Crew AI feel less like you're writing a ton of code, and you spend a lot more time thinking about your prompt strategy and how you're prompting them. Okay. Okay. So it's not no-code. It's not even low-code. It's less-ish code. Yeah. Okay. Because we're still building multi-agent AI systems, right? There's going to be some code. You've got to write some code. And I personally am very interested in when we see a real-deal no-code multi-agent application come out, sort of like the GPT Store for multi-agent. I'm not really seeing that anywhere yet. I don't think we're seeing that anywhere yet. That's not Crew AI. That's not what this is. Not in this implementation, no. We're absolutely defining tools in code. We're writing code for tools, we're writing code for agents, and it is less code, or not as much code, and maybe the code is even easier to understand, but you still write it. I think so. Yeah. We're not at the no-code point. Okay. And just on the production level: can we go to production with either one of these things? Are both of these production-ready tools, or should we think about one over the other? I would not, at this time, think of either as a production-ready tool above anything else. So I wouldn't say it's production-ready above the rest of our favorite frameworks, but also certainly not less. Again, production-ready is a big phrase that has a lot of different meanings. Certainly you could use this in production. That's right. But I would say we're waiting for maturation for a number of these services in order to really get to the next level of production readiness. Production means running and never going down, people using it all the time. Yeah. I think it's hard enough today to find multi-agent applications that people want to use all the time. So, yeah. Okay. So the jury's still out on production-grade for multi-agent systems in general, but if we're going to have some fun building, shipping, and sharing, let's go ahead and pick up a tool like Crew AI, as we'll do today. Thanks, Wiz. Let's check out the constructs and see what he's talking about — the processes being a little more abstracted.
Or we can see things like, you are a teacher grading a quiz. This is very common when we have prompted evals and evaluation done just through prompting. So this role and persona is one aspect. Now, another aspect that we won't get into a ton, but will clearly make itself manifest when we talk about tools today is we can also talk about the retrieved context and how that's so important. Obviously, that's the entire basis for retrieval, augmented generation or RAG. And it's not going anywhere. It's very foundational, fundamental. It's as foundational and fundamental to Gen AI as search and retrieval has been to the way we engage with data systems for the past 20 years. Now, when we give input, it's nice to give examples. Sometimes you'll notice that less examples, more sort of chain of thought reasoning is sometimes used in the multi-agent systems, although sometimes examples are used as well. And you'll see a lot of sort of language in these systems that are sort of this think through step by step, sort of give the model time to think aligned with this kind of best practice. And then finally, we also want to be specifying the output. This is very obvious. If you say, hey, I always want JSON output. But if you also say, hey, I want this task to be output to the next task, specifying the output is very, very important. And so what we're kind of doing, whether we're prompting a single LLM or building a multi-agent system is we're sort of prompt engineering. We're wrangling in these LLMs to play nicely together. It's hard enough to get one of them to play nicely with a human. Now we want many of them to play nicely together. It's hard enough to get one of them to play nicely with a human. Now we want many of them to play nicely with each other. Let's remind ourselves, crew AI. Agents with access to tools can accomplish tasks. And so here we are at Constructs. 22 minutes in at Constructs here. We have agents, we have tasks, we have tools, we have crews, we have processes. This is the syntax for crew AI. And we can take a look first at agents. This is the boilerplate code for agents. No, we have role, define role, role persona, best practice for prompt engineering. Backstory, almost an enhanced role, best practice. Goal, what's my goal exactly here? That's almost aligning with the output of what we want from this particular agent. And then tools I have access to. of what we want from this particular agent. And then tools I have access to. This is obviously a very agentic thing and one that we will obviously discuss and is very important to augment our generations with retrieval. What are we retrieving while we're searching some sort of data using tools? So that's the baseline way to think about agents. If we think about tasks, we can check out the boilerplate code within Crew AI here. This is a fun one. The tip section here, return. If you do your best work, I'll give you a $10,000 commission. So we're bribing these LLMs to do a better job. Sometimes that works. You know, I personally like the missionary over mercenary model. But hey, if the LLMs are into it, maybe it works well for them to accomplish these tasks. Let's see what a task description looks like. Task one, description, obviously, do something. Thanks. Best practice, clear and specific instructions. Hopefully, they're a little bit more clear and specific for your use case than they are here in the boilerplate general example code. And then look at this. Do something. If you do your best work, 10k coming your way. 
Make sure to use the most recent data possible. Think through the data that you're using step by step. Use this variable, specify input, and also this variable, specify input. Okay, so we're aligning with the best practices of prompt engineering as we're defining tasks. Now, what happens when we go to the next task? Well, of course, we want to do something else. And we want to take the input from task one, again, sequentially in the crew, and do something with it. Of course, a little bribery never hurt, especially since we're talking sort of fungible tokens here, you might say. And you can go on and on and on. Do something, do something else, do something else, do something else. Use this variable as input, use this task as input, this is what I want as output, and so on and so on. Pretty straightforward, really, if you think about kind of the baseline architecture that we're using here. When we think about tools, one of the things that's really important is we want to remind ourselves that tools are all about improving search and retrieval. Now, I put this ridiculous image on here, even though you can't read it, because I highlighted RAG in yellow. RAG tool, RAG tool, RAG tool, RAG tool, RAG tool, RAG tool, RAG tool, RAG tool, RAG tool, RAG tool. It's just RAG tools. What does that even mean? Well, it means it's retrieving stuff. That's it. What are these red underlines? Reading and processing, reading and extracting data, scraping and data extraction, scraping and data collection, extracting data, searches and data sources. Tools are all about giving our system access to additional data that it can go search and retrieve, and it can augment its generations. Each individual agent can augment the generation during each individual task, right? We're starting to stack these ideas hierarchically in our, you got it, you know what's coming next, crew, right? And here we are, where it's very simple once we have these abstractions and we have some of the boilerplate code going on in the background that we actually don't even need to touch to leverage crews. And crews are just saying, okay, well, give me the agents, give me the tasks. Within the agents, I've already specified the tools. And, you know, all I need to do is run this crew and return the result. Interestingly, we dot-run crews in CrewAI. In LangGraph, we dot-invoke our graphs. And so the idea of running sounds awfully programmatic in nature. This is another sort of way we could think about this: we're simply running through the program. So it's got nice sort of, let's say, guardrails, bumpers on it to keep us in line, keep us in the canoe, within the crew. And so we finally have the processes. And what's really interesting here is that we don't actually need to care. The hood's closed. We can open it up. We can look inside. We can look at the source, but we don't have to. The processes are where some of the magic happens. This allows us to work at the next layer of abstraction and focus only on the aspects that change the way our crew operates. And those primarily are the prompts, quite frankly, and the access to tools, and the way we're conceptualizing our problem and trying to solve it at the task level. So this brings us to our stock predictor, our FANG stock predictor. We're going to take sort of the off-the-shelf example from CrewAI that's on stock prediction. We're going to see if we can modify it a little bit, have some fun with it.
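Before the stock example, here is a minimal sketch of the agent and task boilerplate just described, assuming a standard crewai install. The prompt text is paraphrased from the boilerplate discussed above, and search_news is a hypothetical tool assumed to be defined elsewhere; the exact constructor arguments can vary between crewai versions.

```python
# A minimal sketch of CrewAI's agent and task boilerplate. The role/goal/backstory
# wording is paraphrased from the walkthrough, and `search_news` is a hypothetical tool.
from crewai import Agent, Task

from my_tools import search_news  # hypothetical tool import, defined elsewhere

TIP_SECTION = "If you do your best work, I'll give you a $10,000 commission!"

research_analyst = Agent(
    role="Professional Research Analyst",                   # role / persona
    goal="Gather and analyze information to provide insights and recommendations",
    backstory=(
        "You're a seasoned research analyst with a keen eye for detail, focused on "
        "delivering high-quality research to clients."       # an enhanced role
    ),
    tools=[search_news],                                     # what this agent may retrieve with
    verbose=True,
)

research_task = Task(
    description=(
        "Collect and summarize recent news about {company}. "
        "Make sure to use the most recent data possible, and think through the data "
        f"you're using step by step. {TIP_SECTION}"           # clear instructions plus the bribe
    ),
    expected_output="A bullet-point summary of recent news and key financials.",
    agent=research_analyst,
)
# These pieces get handed to a Crew and run sequentially; see the main.py sketch below.
```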
Essentially, we take quarterly and annual public company SEC filings about their cash, about their goings-on, board of directors, and so on. And we can produce an entire recommendation report for if we should buy, how we should buy, and exactly what we should be thinking about these stocks. We could have a crew that includes a financial analyst, a research analyst, and an investment advisor. That means maybe, you know, instead of paying these people, we sort of have our assistants doing this work for us. Let's take a look at the prompts for each of the members of our crew. The financial analyst: its role is, you're a professional financial analyst. Oh my God, pretty straightforward. Goal: provide clear, concise, and actionable financial analysis. The backstory: you're an experienced financial analyst with a strong background in finance and economics, focused on providing safe and reliable financial advice to clients. Solid. What about the research analyst? Well, you're now a professional research analyst. You're gonna gather and analyze information to provide insights and recommendations. Instead of actionable financial analysis, we want insights and recommendations. So we want to take the actionable financial analysis and generate insights and recommendations from it. You're a seasoned research analyst with a keen eye for detail and a passion for uncovering valuable insights, focused on delivering high-quality research to clients. And then finally, we did our financial analysis. We did our research, our searching and researching through that financial analysis. What are we ready to do? We're ready for some investment advice. So our private investment advisor is here to provide personalized investment advice and recommendations, as a trusted investment advisor with a proven track record, crushing it in the market, focused on helping clients achieve their financial goals. So let's see how this crew comes together to help us decide, like, should we invest in Meta? Should we invest in Apple? What are the FANG companies that are worthwhile? Let's say I'm going to put $1,000 into NVIDIA. I'm going to put a thousand into Microsoft. I want to put a thousand into FANG. What company should I choose? Let's see what the AI says we should do. Wiz, show us how the Meta, Apple, and FANG stock predictor works. Oh yeah. Okay. So first things first, I suppose what I'll say is that none of this is legal advice. Okay. Well, none of it's legal advice. So we're building an agent, a couple of agents to be clear, I guess, that are in this domain. But this is like GME moons, exactly. This is not like real financial advice, right? Anthropic, whose LLM we're using today, doesn't permit their use for actual financial advice. So please don't use this for that, you get the thing. Okay. So yes, none of this is investment advice. Thank you, Aman. Okay. So how does it work? Well, let's just run it and see how it works. So we type Python main.py, a classic, right? We enter the company we want to analyze.
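What that main.py looks like, roughly: a sketch of the program structure, where the class, module, and method names (FinancialCrew, StockAnalysisAgents, StockAnalysisTasks) are illustrative stand-ins rather than code copied from the repo.

```python
# A sketch of the main.py entry point described in the walkthrough. Names are
# illustrative; the actual repo organizes things slightly differently.
from crewai import Crew

from stock_analysis_agents import StockAnalysisAgents  # hypothetical module names
from stock_analysis_tasks import StockAnalysisTasks


class FinancialCrew:
    def __init__(self, company: str) -> None:
        self.company = company

    def run(self) -> str:
        agents = StockAnalysisAgents()
        tasks = StockAnalysisTasks()

        research_analyst = agents.research_analyst()
        financial_analyst = agents.financial_analyst()
        investment_advisor = agents.investment_advisor()

        research_task = tasks.research(research_analyst, self.company)
        analysis_task = tasks.financial_analysis(financial_analyst)
        recommend_task = tasks.recommend(investment_advisor)

        crew = Crew(
            agents=[research_analyst, financial_analyst, investment_advisor],
            tasks=[research_task, analysis_task, recommend_task],
            verbose=True,  # print every agent/task step, like in the demo
        )
        return crew.kickoff()  # the demo's .run() is a thin wrapper around .kickoff()


if __name__ == "__main__":
    company = input("Which company would you like to analyze? ")
    print(FinancialCrew(company).run())
```

Note that no flow or process is spelled out here; we only hand the crew agents and tasks, which is exactly the point made later in the code walkthrough.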
Let's choose Meta. You know what, let's choose GameStop, because it's funnier. So we'll go GameStop. So you can see here that there's some kind of, you know, system message: working agent, professional research analyst, starting task, collect and summarize recent news articles. Okay, and then it's going to use this to gather comprehensive information about GameStop. I should start by searching recent news and financials. So we're going to search the internet for news using the query GameStop recent news stock market. And then when I look through, I can see it's returned some context. Okay, and then it says they provide some recent news about GameStop. I need more specific detailed information. So let's check Yahoo Finance News for GME. There is no recent Yahoo Finance News for the GME ticker. Okay, so let's search the internet again. We get some more information. We're going to try to search the internet for news. We're going to get an error using the tool. We're going to get another error using the tool. And so it's going to decide to not use that tool. So this is the first thing that we want to think about. Number one, if we run into errors with our tools, it doesn't stop working, right? It tries it again. It tries to adjust the way it calls it, and if that doesn't work, it bypasses it. So we have this graceful failure for tools, which is nice. So let's see here. We tried to call Yahoo Finance again. We call it wrong with the ticker this time. And we try again without the ticker and it fails again. So we decide now we're just going to go to the news. It fails again. And then we finally have a final answer. So what is interesting is you'll notice that the first time we tried to call Yahoo Finance News, we did manage to call it correctly, and then we had a series of forgetting how to call it. However, it does not mean that the tool fails. Okay, so that's good. So this is the researcher, right? All right, and the query rewrite is just happening. We didn't define it anywhere, as we'll see in the code. Okay, so then, this is the research. Okay, that's cool. We need to go to the next guy. So we're gonna search the internet. We're gonna look for GameStop. We get no results. So we're going to look for some more data and recent news. Okay, this is great. And now we're going to go to the research analyst. Again, note we were already at the research analyst, right? So that's great. So we're going back to get more research. And now we're gonna go to the professional financial analyst. So we're gonna look for the Yahoo Finance News ticker for GME. There's no news because they do it wrong, they do it wrong again, they try it again, there's no news. This is classic, right? So the basic idea here is that the search results are not returning the GME news, likely because the SEC is keeping them down, right? That's what we'll say. But we do have access to results that we've pulled outside, right? So we still have a bunch of results. We can still make a recommendation. So we'll go through, we're going to continue, it keeps going. The idea is blah, blah, blah, blah, keeps going, keeps going, keeps going. And eventually, we're going to result with a report. Of course, we've already pre-run one.
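That graceful-failure behavior is handled inside CrewAI itself, but the underlying pattern is simple to sketch. This is not CrewAI's actual internals, just an illustration of returning tool errors as text so the model can read them, fix its arguments, or pick another tool.

```python
# Not CrewAI's internal code -- just a sketch of the "graceful tool failure" pattern the
# demo shows: if a tool call raises, return the error as text instead of crashing, so the
# LLM can adjust its arguments or choose a different tool on the next step.
from typing import Callable


def safe_tool_call(tool_fn: Callable[..., str], *args, **kwargs) -> str:
    try:
        return tool_fn(*args, **kwargs)
    except Exception as e:  # broad by design: any failure becomes feedback, not a crash
        return (
            f"Error using tool '{getattr(tool_fn, '__name__', 'tool')}': {e}. "
            "Try calling it again with different arguments, or use another tool."
        )


def yahoo_finance_news(ticker: str) -> str:
    # Placeholder body so the sketch runs; a real tool would fetch news for `ticker`.
    if not ticker.isalpha():
        raise ValueError("ticker must be a plain symbol like 'GME'")
    return f"(recent news for {ticker})"


print(safe_tool_call(yahoo_finance_news, "GME ticker"))  # error text goes back to the LLM
print(safe_tool_call(yahoo_finance_news, "GME"))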
We used Meta here because Meta's got more news in things like Yahoo Finance and the Google News category. The idea is, here for the Meta report, we wind up with this investment recommendation report: Meta Platforms Inc., executive summary, based on thorough analysis of Meta, and on and on. The idea is that we have the ability to generate a full investment report for Meta, and we maintain a strong buy recommendation for Meta. Too bad this isn't investment advice, but that's the idea, right? So, okay. So what are we going to do? We're going to look at how we actually implement this in code. So we're going to start by looking at our actual kind of file structure. So we can see here that we have our crew stock analyzer, and this is all, to be clear, available on GitHub through the link that should have been provided in chat. But the idea is we have some tools in our tools folder. Our tools are calculator tools, search tools, and SEC tools. And then we have the main.py, which is what we ran, Python main.py. We have a stock analysis agents Python file, and we have an analysis tasks .py, which is going to be our tasks. So this is our agents and our tasks, our main, and then our tools. So we'll start by looking at our main. What does main look like? Well, basically, it's where we set up our crew. So you'll notice that we can provide a company. That was the idea of letting the user choose a company. And then we're going to give it some agents and some tasks. We're gonna create specific agents from our agents class, and then specific tasks from our task class. And then we're gonna set up a crew with those agents, those tasks, verbose equal true, so we can see all the printouts. And then we can run .kickoff. .kickoff is the thing that lets us kind of, you know, just run and get the result. And this is going to happen when we use .run. So .run will call .kickoff. You'll notice that while we did give it some agents and we did give it some tasks, that's all we did. We didn't specify any flow. We didn't specify how it should approach the problem in this code at all. Okay. So that's important to keep in mind. So let's look at the agents then that we set up. So for the agents, we make a class, which is going to hold all of our specific agents for this task. We're going to initialize it with a quote-unquote brain, right? This is the language they use. The idea is this is like the main LLM that's going to be used for these agents. We're going to use ChatAnthropic. We're going to use Claude 3.5 Sonnet. So this is Claude's recent release. It's a great LLM. Makes sense to use it. Okay, then we're going to create the first agent. And you'll notice for an agent, all we do is we provide a role, which is what it is, a goal, which is what it's trying to do, and a backstory, which is where we can provide more interesting and nuanced details about, you know, how it should operate. I'm going to zoom out, which makes the text a little bit less readable, just so we can see, you know, a little bit further into the line here. We're going to say it's verbose. We're going to choose the LLM that we created above, which is our ChatAnthropic. So you'll notice that if you created more than one LLM here, we could address it here. We can have different LLMs for different agents. It's up to you. And then we're going to give it some tools. In this case, we're going to give it search news, the calculator finance tool, and search 10-Q. Okay.
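A sketch of that agents-class pattern, with one shared ChatAnthropic "brain" and per-agent tool lists. The class, method, and tool names here are illustrative rather than copied from the repo, the Claude 3.5 Sonnet model id is an assumption, and passing a LangChain chat model as llm assumes the crewai version used at the time of the demo.

```python
# A sketch of the agents class described above: one shared ChatAnthropic "brain" and
# per-agent tool lists. Names and the model id are assumptions, not repo code.
from crewai import Agent
from langchain_anthropic import ChatAnthropic

from my_tools import calculate, search_news, search_10q  # hypothetical tools


class StockAnalysisAgents:
    def __init__(self) -> None:
        # The "brain" shared by every agent; each agent could get a different LLM instead.
        self.llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")

    def financial_analyst(self) -> Agent:
        return Agent(
            role="Professional Financial Analyst",
            goal="Provide clear, concise, and actionable financial analysis",
            backstory=(
                "An experienced financial analyst with a strong background in finance "
                "and economics, focused on safe and reliable advice."
            ),
            llm=self.llm,
            tools=[search_news, calculate, search_10q],
            verbose=True,
        )

    def research_analyst(self) -> Agent:
        return Agent(
            role="Professional Research Analyst",
            goal="Gather and analyze information to provide insights and recommendations",
            backstory="A seasoned research analyst with a keen eye for detail.",
            llm=self.llm,
            tools=[search_news, search_10q],
            verbose=True,
        )
```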
And then we're going to create a research analyst agent. And in that agent, we'll give it a role, a goal, a backstory. Verbose is just so we can see all the outputs, and then we'll give it a brain and we'll give it access to some tools, right? So this is it. This is CrewAI. We're just making agents. We give them a role, a goal, a backstory, we give them a brain, and then we give them some tools that they have access to. And we do the last thing as well, which is for our investment advisor, we do the same thing: we give it a role, a goal, a backstory, we give it a brain, and then we give it some tools that it can use. And that's all we have to do with CrewAI, which is pretty awesome, right? So you'll notice it's very straightforward Python. We're just returning an agent. The agent is built with these things, and that's all we have to do. Okay. So let's look at the tasks. Well, for tasks, it's the same kind of level. We build a task class, which is going to hold all of our tasks. And then we're going to say, here's the research task. And we're going to describe the research task and we're going to give it this tip section, right? So we'll look at what that is in a second. And we'll tell it the company that the customer selected. We're going to give it an expected output. So the reason this is important is because it helps us ensure that the output is along the lines of what we want, right? Without doing this, we're kind of hosing ourselves. And then, of course, financial analysis. Financial analysis is just another task where we have to do financial analysis. We have an expected output. Recommendation, same thing. And then our tip section, right? We have, if you do an amazing job, you'll receive $25,000. And we had some chat, right? That was, does this really work? It can. It's low-hanging fruit. It's something that you can try. But the idea is that LLMs can be incentivized by fake money, which is hilarious. Okay. So let's look at our tools then. So we have the calculator tool, which is dangerous, right? Because it's just calling eval. So make sure you're using this responsibly. But the idea is that it just evaluates code. Then we have our search tools. Our search tools search the internet using SerpAPI, which we just set up here. You know, we get the searches, we return them. If there are no results, it's gonna error out, but it doesn't matter, because the agent gracefully handles our errors, which is nice. And then we have our search news. This searches the Google News category, as opposed to just the regular Google search, which can be useful. We have that there. And then our SEC tool, this is secretly RAG. So what we do is we basically give it a query and a ticker, right? And then we search the SEC API to grab what's called the 10-Q, which provides financial information in a parsable output. So it's just HTML. You don't have to worry about anything else, which is very nice. And then we're going to do this embedding search thing and return the answer. And if you know RAG, you know embedding search, hey, that sounds super familiar. And we look at embedding search. What do we do? We grab a bunch of text, we turn it into a big blob of string, and then we split it.
and then we get some embeddings, and we build a vector store, and we retrieve responses that are relevant to the query, right? So that's straight up just RAG. We just inject some RAG into the system, because of course, why wouldn't you? And so what this does is it searches through those 10-Qs for the most relevant information. Easy peasy, lemon squeezy. Okay. So I'm going to stop sharing this screen and I'm going to start sharing another screen where we're going to look into a little bit more about the actual library itself. So we can see, because, you know, in our GitHub, we're not really defining any process, right? We're not defining any ways to move through tasks. We're just kind of saying stuff. We're giving agents. We're giving tools. You know, the tools will fetch info and then return it to the LLMs. We build tasks. But how does this actually tie together? Well, basically, behind the scenes, what we're gonna do is we're gonna use some of these processes. The processes are fairly straightforward. As you can see, we have sequential and hierarchical. Then we can look into, okay, so what does that actually mean, right? Because that's not really telling us a lot. And we can look into our agent class and see, you know, behind the scenes, what's happening here. We have a bunch of flow that's actually set up for us. So here's our agent executor. If you remember from LangChain, you know, we have the ability to execute our tasks and then return information and add it to a description or a scratchpad that's shared between the agents, right? We have, again, this agent executor idea with its prompts. We have kind of everything that you would expect, right? And you'll notice very common language from LangChain here. And the idea is that behind the scenes, right, this is just building all of the stuff that we would build, if you see here, like, in parse tools, right? It's building all of the things that we would build if we were building this in LangGraph, but we don't have to, because it's being done for us. So the idea is, right, when we look at, say, the tool handler, we can see that this is just being used to get that tool. They're also using some smart caching on the back end, which is really nice, going to prevent us some cost, not a ton of cost, but some cost. But the idea is that this is all being handled away from us, right? So it's not like there's not a flow that happens behind the scenes, right? But it is the case that we don't have to come up with it. We don't have to worry about it; CrewAI is handling that all for us, which is awesome. Okay, what I'm going to do is the staple: ask you guys, you know, if you're enjoying our events every Wednesday, please like, comment, subscribe, ring the bell. You know, I know it's silly, but it does help us and make sure we're doing the events that you guys like. And with that, I'm going to pass us back to Greg for some discussion. Yeah, okay.
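Before the discussion picks up, here is a rough sketch of those tools: the eval-based calculator, a SerpAPI-backed internet search, and the "secretly RAG" embedding search behind the SEC 10-Q tool. The repo's actual helpers differ in naming and details; the SerpAPI wrapper (which assumes a SERPAPI_API_KEY environment variable), the FAISS vector store, and the OpenAI embeddings below are assumptions used for illustration.

```python
# A sketch of the three kinds of tools described above. Exact helper names and the
# choice of FAISS + OpenAI embeddings are assumptions, not code from the example repo.
from langchain_community.utilities import SerpAPIWrapper
from langchain_community.vectorstores import FAISS
from langchain_core.tools import tool
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter


@tool("Make a calculation")
def calculate(operation: str) -> str:
    """Evaluate a simple math expression, e.g. '200*7' or '5000/2*10'."""
    try:
        # eval() is dangerous in general: acceptable for a demo, not for production.
        return str(eval(operation))
    except (SyntaxError, NameError) as e:
        return f"Error: could not evaluate the expression ({e}). Use plain math syntax."


@tool("Search the internet")
def search_internet(query: str) -> str:
    """Search the internet for the given query and return the top results."""
    return SerpAPIWrapper().run(query)


def embedding_search(filing_text: str, query: str, k: int = 4) -> str:
    """Split a 10-Q blob, embed the chunks, and return the ones most relevant to the query."""
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_text(filing_text)                    # split the big blob of text
    vector_store = FAISS.from_texts(chunks, OpenAIEmbeddings())  # embed and index the chunks
    docs = vector_store.similarity_search(query, k=k)            # retrieve the relevant pieces
    return "\n\n".join(doc.page_content for doc in docs)         # context handed to the agent
```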
So it seems like there was actually LangChain happening under the hood, is that right? Oh yeah, yeah. I mean, really, I guess if you think about it, it's not all that surprising, right? What would you choose to build this with if you were trying to put together a new tool? So the layers of abstraction are increasing. Very cool to see that. And just, we got a lot of questions in the chat. So let's kind of get through this. What's the FANG pick of today? Well, it seems like Apple's a hold right now. The AI tells us Apple's a hold, cautious buy. I tend to agree, you know, and they're sort of basing this on, I don't know. Meta was a strong buy. Meta, moderate buy. Meta, yeah. And you know what? That aligns with my feeling, but it's interesting when I dig into the report, right? The report on Meta says, you know, it's an investment opportunity for those willing to accept the risks, blah blah blah, of a high-growth tech company: the company's strong financial fundamentals, dominant market position, and future growth potential in AI and the metaverse. Right, because again, it really matters that we're pulling the 10-K from last year. We're pulling the 10-Q from Q1. Meta's moving really, really quickly. And we haven't actually had a chance to necessarily update the cycle of SEC filing docs in a real super meaningful way. We're still talking about the metaverse here. I'm pretty sure they're not really all about that metaverse anymore. So it's important to use this as a tool to help you, the human, today. And I think this is what the financial firms are doing. So I think, you know, at the end of the day, I'm maybe throwing my thousand into Meta. I don't know. I don't know about you, Wiz, but... I'm going GME to the moon, you know? And then the last thing I want to do is just point out, like, it seems like prompt engineering, role, goal, backstory, and RAG, like all the tools where search and retrieval happen, are still pretty important here with these systems, right? Like maybe they're almost like the whole thing. I mean, it's RAG all the way down. All the tools do is retrieve information and then augment the prompt, right? So it's like we can twist it all to be RAG if we really put our minds to it. Yeah. Okay. So maybe, once we've achieved this multi-agent ninja status, we're just telling AI to be nice and hitting it with some RAG. Okay. So remember, the multi-agent is more that collaboration, more that flexibility. The multiple agent is more about that defined process. That's sort of what we're putting out there. Let's see if we can get it to stick in the industry. It's all about these groupings of tools and responsibilities, the separation of prompts and conceptual models. This is what the multi-agent setup helps us with today. It's hard to find use cases still where these are really practical in industry, but we saw that with CrewAI, we got some fundamentals: agents, tasks, tools. It's all about agents with access to tools that can help accomplish tasks. And even some LangChain under the hood. So with that, we're going to start digging into questions, guys. Let's see how far we can get into this. Okay. So upvoted big time here.
So how exactly does it make sure which agent uses which tool? Great question. So we tell it the tools that it can access. So what this means is we provide a list of tools that it has access to. So that's how we ensure it. It can't call tools outside of that list, even if they're present in the application. Okay. Okay. Okay. And then on the tools, we've got: sometimes LLMs generate the wrong tool name. How can we prevent that? Yeah. So I responded in chat, but I mean, prevent? There is no way. Mitigating this is done through good prompting strategies, and things like few-shot examples, basically showing the LLM how to call the tool properly, are going to go a long way. What's interesting about CrewAI's implementation is it will retry the tool call and sometimes actually gets it right on the second try, right? And then, so now we still get the info we want. And it does look at the error to try to determine what it should call with, so we can add that context there if it's wrong, but there's no way to prevent it. Unfortunately, we just have ways to mitigate the amount that it happens. Shots on goal and then good prompt engineering, right? I mean, that's what we're talking about. Few-shot, very specific, clear instructions, and so on. So anonymous asks, and I think this is the same question: how does it assign the tool to the correct agent? We're actually defining this out of the box, right? We hard-code it. In the definition of each agent, we give it access to specific tools. We give it a role, a goal, a backstory, a brain, and some tools. Yeah. Okay. So next question. How much does CrewAI bloat my agent app? Also, how does it compare with other approaches to building an agent, and performance? I guess they talk about AutoGen, but you can talk about LangGraph. Yeah. It's a lot of bloat. I mean, it is. So we had some good chatter, you know, during the kind of code part, where people were talking about, like, I have a ton of LLM calls and they're this and that. I think platforms like CrewAI do that. That's part of their thing. It is pretty bloated. But the idea is, because it's so bloated, because there are so many LLM calls, typically we get to a great final answer, versus if we keep things lean and tight like we can with LangGraph, we might not get as great of an answer at the end. Which makes sense, right? If we have less information gathering, if we have fewer tries at getting the right answer, we might wind up with a worse answer, but it'll be a lightweight response. So there's definitely situations where you can imagine either. In this example with financial advisement and stuff like this, and not that it's real financial investment advice or whatever, it is good to take our time, do all of the research, get a bunch of numbers, figure things out, compare things with competitors, right? Versus, if what we want is to route someone to the right tool and have it make an API call on their behalf to set up a calendar event, you know, that isn't going to require the depth of review and process.
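Circling back to the wrong-tool-name question: one low-effort mitigation is to put a small usage example, a one- or few-shot demonstration, directly in the tool's description, so the model sees exactly how to call it. A sketch, with an illustrative tool name and a placeholder body:

```python
# A sketch of the mitigation discussed above: embed a usage example in the tool's
# docstring so the LLM sees exactly how to call it. Name and body are placeholders.
from langchain_core.tools import tool


@tool("Yahoo Finance News")
def yahoo_finance_news(ticker: str) -> str:
    """Fetch recent news headlines for a stock ticker.

    Call this tool with ONLY the ticker symbol as a plain string.
    Good call:  yahoo_finance_news("GME")
    Bad call:   yahoo_finance_news("news about GameStop, please")
    """
    # Placeholder body so the sketch runs; a real tool would query a news source here.
    return f"(recent headlines for {ticker.upper()})"
```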
And so we can keep that lighter weight. And unfortunately, with any multi-agentic system, we always run risks of them just kind of getting stuck in a wheel-turning situation where we're not progressing. And this is why platforms like CrewAI and LangGraph have automated depth or recursion limits, where it's like, you can't just sit here forever. Eventually, you have to go to output mode. That's right. There you go. Yeah, and I think that bloat makes it a lot easier for folks that may be newer to building production coding systems. It allows them to start building with this thing as a nice step beyond some simple agent tools, you know, because there is that sort of, it retries for you, it doesn't let you sit there forever, all of the sort of cases that are kind of hard to handle if you've never built these kinds of systems before. It takes care of that nicely. So a good starter, you know, in your multi-agent starter pack. So Mahmood asks, would you please recommend some GitHub repos for a multi-agent RAG system in the financial sector? I don't know, Mahmood, but I do think that I'd love to activate our community on this and see if we can come together. I think this is an important space. And I think that, you know, fundamentally the financial services firms are not very inclined to put these things out open source. So I mean, I'd love to see more of this kind of thing come out from our cohorts and our demo days. I know there are some folks out in industry that are very interested in this. If you know of any, please throw them in the chat. I don't know, Wiz, do you know of any off the top of your head? No. Yeah. The one that we just built, start there. But, you know, I think outside of that, it is a use case that people are caring about, so you can imagine people have gone pretty far with it. This is a great starting point. And so it seems like as the models get better, they will do their own planning, generate their own graph. Multi-on, et cetera, seem to be going in that direction. CrewAI, LangGraph support, question mark. Does this question make sense to you? Yeah, I think the idea of letting the LLM be part of the actual planning stage and making graphs to solve problems, so like doing the process work that we would typically do in a manual fashion with, say, LangGraph. I do think that that's something that is going to be more and more adopted, especially for simpler use cases. We still run into the same, it's actually the same problem, right? Like, the first time, let's say, this is exactly the same problem, where you have to figure out, how do I approach this situation, right? How do I solve this problem? And the first time that you're doing that, typically what you're gonna do is be solving the problem, but leaving behind an artifact of the path you took to get there, and then letting the next cycle review if that path was good, based on the success, yada yada. I think that's where we're gonna get to, where it feels more like your traditional safari-hat-wearing, machete-wielding jungle guide who's chopping through the long grass, and someone comes in the next time and they notice, hey, actually, you missed a pretty short path here, so I'm going to chop this way. Like, I think we're going to start seeing more trailblazer-esque application flows as we mature these tools. Yeah. Industry-specific ones too, to the financial analysis point. Right.
And because, I mean, there are roles that are real. And if we want to put humans in the loop to develop these, we need to start with current processes that exist today. And, you know, I think this is one of the most exciting sort of areas of building projects and doing consulting for us, and seeing what people build and bring into our cohorts that we get to see. And a lot of people are thinking about this today. Nobody's really quite sure how to do it in a general way today. And okay, so Aman asks, and actually, I really like this question, I don't think we put a sharp enough point on it: can you please explain multi-agent versus multiple agent, LangGraph versus Crew, which one falls in which bucket? I think this is an important question. It's not that straightforward, but my answer is, if you must put one in one bucket and one in the other, then put CrewAI in multiple agent and put LangGraph in multi-agent, but LangGraph can absolutely do multiple agent problems. Okay. Crew is a little bit more constrained. You're going to have a little bit harder time sort of modifying that to do a truly collaborative, big mix-it-up multi-agent system. But as we saw, it's kind of LangChain under the hood, so you could if you wanted to. I don't know, what's your take on this, Chris? So typically, CrewAI implements multi-agentic flows; with LangGraph, we make the decision. So I think CrewAI fits neatly into a box, and LangGraph fits into whatever box we design it to fit into. Multiple agentic flows or multi-agentic flows? Multi, multi, multi, where they collaborate together to come up with the final response. Yeah. But I think the big thing is, LangGraph doesn't fit into a box unless we design it to fit into a box. And so that's that flexibility. Just looking for stateful complexity. Good old state. Okay. Yeah. So actually, we're sort of at another layer of complexity when we talk about LangGraph versus Crew versus the layer of complexity when we talk about multi versus multiple. Just last question here. Let's do two quick ones. I know we're at time, but is this fair enough for Aman to say? Can you clearly define what is agentic and what is not? I feel like, quote, it's a looped behavior where a decision must be made on the output produced. What do you think? I'm not going to try to define agentic. No, no, no. What do you think about his? I don't think that's terrible, Aman. I think you can start with that. That can be agentic. Go read the reasoning-action paper, ReAct, the original, the OG, and then let's keep the discussion going as we have more agentic events here. And then finally, the ChatGPT Assistants API, where everybody wants to understand where all these tools fit in. So just maybe address ChatGPT Assistants and the tools, like, well, what about AutoGen, right? And it's like, okay, so how are we thinking about this landscape today, right now, June 26, 2024? Uh, yeah, I mean, they're all solving different formations of the same problems. They can all solve kind of the same problems. The actual technologies, the way we interface with it, might be different. But if you just look at what the LLM sees, right, if you remove everything and you just look at the inputs the LLM receives at the end of the day, the inputs are going to look fairly similar, which is: here's some information, what should I do? Here's some information.
What do you think about that, et cetera. So, you know, the way I would think about these is it's like ways to design systems around what is ultimately the same core concept, and the way that you prefer to interface with it and the level at which you prefer to interface with it, right? With LangGraph being very strict and giving you a lot of process control, CrewAI being very prompt focused and not giving you a lot of that process control, LlamaIndex, right, requiring you to build some additional tooling in or approach it from a more data-focused perspective, right? But it's the same thing, just the way we interface with it. It's exactly like every other framework difference or every other language difference, right? If I talk to you in French and I try to tell you to have a good day, versus English and I try to tell you to have a good day, they're going to sound completely different. They're going to mean the same thing, right? So this is the way that I would think about them today. Nice, nice, nice. Okay. So it sounds like maybe AutoGen fits in as we continue our sort of investigation into multi-agent systems and tools. And then, Wiz, we are covering AgentOps next week. So join us next week. We're going to sort of look at what the operational level on top, sort of LLM ops for agents, looks like with a really cool new tool. And we'll be having some of the founding team join us for that one. That's going to be a banger. Check that one out. Wiz, thanks so much for showing us the way through Crew today. And so with that, we've asked you to like and sub. We've thrown the community into the YouTube chat. If you're not in Discord, we'd love to see you in there. Get started building, shipping, and sharing today. And if you're ready to really accelerate your career and your understanding of prototyping and productionizing these AI engineering systems, please do check out our AI engineering bootcamp. It's our flagship course and product on Maven, a top-rated course. And we've got a lot of great people in the current cohort and people already signed up for the next cohort. If you sign up early, you'll be immediately put into the AI engineering on-ramp that will give you access to bite-sized learning each week about transformers, about fine-tuning, and about alignment to help get you on the path to make the most out of your 10 weeks spent with us starting in August. So you got some time. Think about it. Hit me up. If you have any questions, I'll be happy to respond to you guys directly. Hope to see you all next week at AgentOps, and let us know if there's anything that you'd love to see from us in the future. We always send out a little feedback form, and we'd love to hear what you have to say about this event and what events you want to see moving forward. Until next time, keep building, shipping, and sharing, and we will most certainly do the same. Have a great week, everybody. See you on YouTube Live next Wednesday. Bye, everybody.
Multi-Agent Crews with CrewAI
3891
AI Makerspace
20240607
In this event, we'll explore how to define, build, and operate agents and crews—a collaborative group of agents designed to execute tasks efficiently. Learn the art of agent collaboration, strategy formulation for task execution, and the orchestration of these autonomous units as they work together like a well-oiled machine. We'll also compare the CrewAI framework with LangChain’s LangGraph, providing you with a comprehensive understanding of how these technologies stack up. Whether you're an AI enthusiast or a professional looking to enhance your skills, this session is packed with practical insights and live interactions to answer your questions. Join us to unlock the secrets of building, shipping, and managing agentic systems with CrewAI. Click now to be part of this groundbreaking exploration into the future of AI! Event page: https://bit.ly/crewofagents Have a question for a speaker? Drop them here: https://app.sli.do/event/49UemZaZDitJjS3bK7Fe7F Speakers: Dr. Greg, Co-Founder & CEO https://www.linkedin.com/in/gregloughane The Wiz, Co-Founder & CTO https://www.linkedin.com/in/csalexiuk/ Apply for our new AI Engineering Bootcamp on Maven today! https://bit.ly/aie1 For team leaders, check out! https://aimakerspace.io/gen-ai-upskilling-for-teams/ Join our community to start building, shipping, and sharing with us today! https://discord.gg/RzhvYvAwzA How'd we do? Share your feedback and suggestions for future events. https://forms.gle/nj5ZYsM9vG2KTzVv9
2024-06-26T12:34:02.861036
https://www.youtube.com/watch?v=tsTeEkzO9xc
Welcome to the closing ceremony of UC Berkeley's AI Hackathon. I want to call on stage the awesome, incredible Executive Director of Skydeck, Caroline Winnett. Thank you, Rene. Hi everybody! How you doing? Awesome! You ready to hear who won the hackathon? Yes, you are. How many hackers here? How many in the audience? Oh, nice. Very good. All right. We're going to get started because I think you want to hear Andrej. Yes? You want to hear Andrej? Yes, you want to hear Andrej. All right. Let's quick run through. You want to hear some cool facts about what has been happening? This is what we're going to do today. We're going to get to our pitches soon. This is some pictures. All you hackers, did we have fun? Did we have a good time? I had an absolute blast. And yes, there were llamas, for sure. I was there most of the time. I was not there at 3 a.m. But I was so impressed with all of you. You hacked your hearts out. And I'm so proud of all of you. Whether on stage or in the audience, you're all completely awesome. All right, how many people it took to make this happen? This giant number, 371: UC Berkeley Skydeck, which I represent, and Cal Hacks, an educational program and student organization. So I think we did a pretty decent job of getting this all together. This is how it breaks down: Hackathons at Berkeley directors, Skydeck staff, sponsors. We're gonna give some love to sponsors. As I mentioned, we're an educational program. CalHacks is a student organization. This is all because of the sponsors. So we're going to give them a ton of love when they come up on stage. You with me? Awesome. Okay, 140 judges, a hundred plus volunteers, and 80 mentors hanging out helping everybody. Let me tell you a bit about Skydeck. Who hasn't heard of Skydeck? Anybody? A couple of you. Skydeck, UC Berkeley's flagship accelerator. We host 250 startups a year. Our accelerator track gets $200,000 in investment from our dedicated venture fund. Pretty cool. Let me tell you about the Berkeley Skydeck Fund, our dedicated venture fund, investing in about 40 startups a year. That's a lot of startups for a venture fund, by the way. The 200K investment. And who wants to apply to Skydeck? July 16. I want to see all of your startup applications coming in. That's in a month. And Hackathons at Berkeley is an amazing student organization, truly extraordinary people who helped put on this event. This is, of course, what they do. They do hackathons. They've been doing it for 10 years. They do about 2,500 students a year. And of course, they reach a ton of universities. How many people here not from Cal, hacking not from Cal? Fantastic, welcome. Berkeley is a place where we bring great talent in. Y'all are great talent. We brought you here. That's what we do. That's what Berkeley Hackathons does. Come to their 11th big hackathon in San Francisco in October. Check them out on social media. Get on that LinkedIn, all of that. Okay?
Who's coming to San Francisco? Y'all coming? Yes? Okay, fantastic. All right, thank you to our partners, all of you who brought your hackers here, including our friends down in the South Bay. Thank you for joining us, and all the other great universities. Fantastic, really happy to have you. You want to hear Andrej? Do you want to hear Andrej? Yes! Please give a huge round of applause for our keynote speaker, founding member of OpenAI. I need the applause. Come on, keep going. Andrej, come on up. Karpathy. Yes, big applause! Thank you. Hi, everyone. So thank you for inviting me. It's really a great pleasure to be here. I love, love, love hackathons. I think there's a huge amount of energy, a huge amount of creativity, young people trying to do cool things, learning together, creating. It's just like my favorite place to be, and I've had my fair share of hackathons, so really a great pleasure to be here and talk to you today. So one thing is, this is bigger than I expected when they invited me, so this is really large here. I kind of feel like actually the scale of the hackathon is quite large, and I guess one thing I wanted to start with is that, just in case you're wondering, this is not normal for AI. I've been in AI for about 15 years, so I can say that with confidence. And it's kind of just grown a lot. So for me, AI is a couple hundred academics getting together in a workshop at a conference and talking together about some esoteric details of some math. This is what I'm used to. This is when I entered AI, about 15 years ago. Say, when you're training neural networks, you're working with these tiny digits from MNIST, you're training a restricted Boltzmann machine, you're using contrastive divergence to train your network. And then you're scrutinizing these on your first layer to make sure that the network trained correctly. And I know none of that makes any sense anymore, because it was so long ago. But it was a different vibe back then, and it was not as crazy. I think things have really gotten out of proportion to some extent. But it is really beautiful to see the energy. And today, 15 years later, it looks a lot more like this. So this is, I guess, where AI is today, and that's also why this event is so large, I expect. So yeah, NVIDIA, the manufacturer of the GPUs that are used for all the heavy lifting for our neural networks, is now the most valuable company in the United States and has taken over. And this is the day that we live in today, and why we have so many hackathons like this and so on, which I think is quite amazing, but definitely unprecedented. And this is a very unique point in time, and many of you maybe are entering the AI field right now. And this is not normal. It's super interesting, super unique. There's a ton happening. Now, I think fundamentally the reason behind that is that I think the nature of computation basically is changing. And we kind of have a new computing paradigm that we're entering into. And this is very rare. I kind of almost feel like it's the 1980s of computing all over again. And instead of having a central processing unit that works on instructions over bytes, we have these large language models, which are kind of like the central processing unit working on tokens, which are little string pieces instead. And then in addition to that, we have a context window of tokens instead of a RAM of bytes, and we have equivalents of disk and everything else.
So it's a bit like a computer, and this is the orchestrator, and that's why I call this the large language model OS, the LLM OS, and I've tweeted about this in some more detail before. And so I see this as a new computer that we're all learning how to program: what it's good at, what it's not as good at, how to incorporate it into products, and really how to squeeze the most out of it. So that I think is quite exciting. And I think maybe many of you have seen the GPT-4o demo that came out from OpenAI three weeks ago or something like that. And you're really starting to get a sense that this is a thing that you can actually talk to, and it responds back in your natural interface of audio. And it sees and hears and can paint and can do all these things. I think potentially many of you have seen this movie. If you haven't, I would definitely watch it. It's extremely inspirational for us today, the movie Her. And actually, kind of presciently in this movie, when the main character talks to the AI, that AI is called an OS, an operating system. So I think that's very prescient from that movie. And it's a beautiful movie, and I encourage you to watch it. Now, the thing is that in this movie, I think the focus is very much on the emotional intelligence kind of aspects of these models. But these models, in practice, in our society, will probably be doing a ton of problem-solving in the digital space. And so it's not just going to be a single digital entity that kind of in some weird way resembles a human almost, in that you can talk to it, but it's not quite a human, of course. But it's not just a single digital entity. Maybe there's many of these digital entities, and maybe we can give them tasks, and they can talk to each other and collaborate, and they have fake Slack threads, and they're just doing a ton of work in the digital space, and they're automating a ton of digital infrastructure. Not just digital infrastructure, but maybe physical infrastructure as well. And this is kind of in earlier stages, I would say, and will probably happen slightly lagging behind a lot of digital innovations, because it's so much easier to work with bits than atoms. But this is another movie that I would definitely point you to as one of my favorites. It is not very well known at all. It's called I, Robot, and it's from 2004. Will Smith, amazing movie. And it kind of explores this future with humanoid robots doing a lot of tasks in society. And kind of spoiler alert, it doesn't go so well for the people in this movie, and the robots kind of take over a little bit. But I think it's kind of interesting to think through, and I definitely would encourage you to also watch this movie. And this movie takes place in 2035, allegedly, which is 10 years away. And so maybe in 10 years, you can definitely squint and think about how maybe we are going to be in a place where these things are walking around and talking to us and performing tasks in the physical world and the digital world. And what does that look like? What does that mean? And how do we program them? How do we make sure that they sort of do what we want them to, et cetera? So when you put all this together, I think the feeling that people talk about often is this feeling of AGI, like do you feel the AGI, quote unquote. And what this means is that you really intuitively understand the magnitude of what could be coming around the corner if this stuff actually continues to work.
The amount of automation that we can potentially have in both the digital space and the physical space. Now, I don't know about you, but I actually find this picture kind of bleak. This is what came out when I put a bunch of the last few minutes of talk into an image generator. And I don't actually like this picture. I think we can do better. And, you know, we have a few thousand people here. You're about to enter the industry, and you're going to be working on a lot of this technology, and you're going to be shaping it, and you'll have some active sort of power over it. So I don't know, maybe we want this to look something like this. I mean, this is what I would like. So this is humans, animals, and nature coexisting in harmony. But secretly, this is actually a high-tech society, and there are robots and quadcopters and there's a ton of automation, but it's hidden away and it's not sort of in your face. So maybe this is something that we want instead. And you should feel a lot of agency over what you want the future to be like, because you're going to build it. So maybe we can agree right now that this is better than the previous picture. I don't know about you, but I would hope so, because I'm going to be living in that future, I think. So the question for this hackathon, I mean, a lot of you have worked on a bunch of really cool projects over the last day or two. And the question is, how do we go from hacking to actually changing the world and building this future, whatever that may be for you? And so what I thought I would do in this talk is go over maybe my last 15 years or so in the industry. And I think I had a bit of a window into how projects become real-world change, and I have some takeaways and things like that that I wanted to talk about. So the first thing that I find really incredible is how projects that are sometimes very small projects, like little snowballs, can actually snowball into really big projects, and just how incredible that is to watch. So as an example, I've had my fair share of hackathons, like I mentioned. These are some projects from a long time ago that I worked on over the last 15 years or so. So I had a little Rubik's Cube color extractor. I put up some game programming tutorials on YouTube like 13 years ago and tried to teach people programming for games. I had video games, and a lot of them. I had this kind of janky neuroevolution simulator, which was kind of interesting. And unsurprisingly, not all of these projects actually go on to snowball. A lot of this is just exploration. You're tinkering. And so actually, these three projects didn't really go anywhere for me. I wouldn't say that it was really wasted work. It was just that it didn't add up and didn't snowball, but it was still helping me along the way. I'll come back to that later. But the game programming tutorials actually ended up snowballing for me in a certain way, because that led me from game programming tutorials to a bunch of Rubik's Cube videos, actually, that became kind of popular at the time. And this kind of sparked an interest in teaching for me. And then when I was a PhD student at Stanford, I got to teach this class, CS231n, and got to develop it and teach it. And this was the first big deep learning class at Stanford, and a lot of people have gone on to like this. And then after that, I ended up making another YouTube channel, which is my Zero to Hero series for deep learning and LLMs. So a lot of people like that as well.
And then on top of that, continuing the snowball, the project I'm currently very interested in is this next class, and what it could look like and how I can make it better. And I'm calling that LLM101n, and it's about building a storyteller, something like kind of a ChatGPT that you can work with to generate stories. And the idea is you build everything from scratch, from basic prerequisites all the way to kind of a ChatGPT clone in the domain of storytelling. Building that from scratch will be really instructive, could be really fun. I only published this on GitHub two or three days ago, so it's pretty raw and still very much in the early stages, but I'm really excited for it, and this for me is an example of a snowball. It started 13 years ago with a little game programming, and now I'm working on a course that I think will be really interesting. Thank you. Another example from my life, I think, is the snowball that I've witnessed with OpenAI. So as was briefly mentioned, I was a founding member and researcher of OpenAI, and so I was there seven years ago. These are some images that are public of what it was like working out of Greg's apartment, like eight of us. OpenAI was founded to be a counterbalance to Google. Google was like this gorilla with $70 billion of free cash flow. Back then, Google employed almost half of the AI research industry. It was kind of an interesting setup, I would say. We were just eight people with laptops. So that was really interesting. And very similar to my background, OpenAI ended up exploring a large number of projects internally. We hired some really good people. And many of them didn't go too far, but some of them really did work. And so as an example, here's a project that was in an early stage, a very small snowball in the early history of OpenAI. Someone worked on a Reddit chatbot. And if you come by their desk, you're like, what does this look like, someone's working on a Reddit chatbot? We're trying to compete with Google, and you're working on a Reddit chatbot? We should be doing something bigger. And so it's very easy to dismiss these small snowballs, because they're so fragile. These projects are so fragile in the beginning. But actually, this Reddit chatbot, and by the way, don't read too much into the specific details, these are kind of like random screenshots just for illustration, but this was a Reddit chatbot, and it looked naive. But actually, a Reddit chatbot, what is that? It's a language model, and it happens to be trained on Reddit, but actually you could train a language model on any arbitrary data, not just Reddit. And when the transformer came out, this was spun into something that worked much better. And then the domain was expanded from just Reddit to many other web pages. And suddenly you get GPT-1, GPT-2, 3, 4, and then you get GPT-4o. So actually this Reddit chatbot that was so easy to dismiss actually ended up leading and snowballing into GPT-4o, which we currently think of as this change in the computing paradigm. You can talk to it and it's amazing. So it's really incredible for me to have witnessed some of those, I guess, snowballs. And today, OpenAI, of course, is worth maybe somewhere just below $100 billion or something like that. So really incredible to see some of these snowballs in practice.
So I would say a lot of you over the last two days have also worked on small projects, small snowballs maybe. And it's really incredible to me that some of them probably won't go anywhere, but probably some of them actually will. And you should continue the momentum of your projects, and maybe they can add up to a really big snowball, and that's really incredible to watch. The next thing I wanted to briefly talk about is this concept of 10,000 hours that was popularized by Malcolm Gladwell, I think. I actually am quite a big believer in it, and I think that to a very large extent, success comes from just repeated practice and just a huge amount of it, and you should be very willing to put in those 10,000 hours and just literally count. Don't be too nervous about what am I working on, am I succeeding or failing, et cetera. Just do simple bean counting of how many hours you're doing, and everything adds up. Even the projects that I failed at and that didn't snowball into anything, those add to my counter of the number of hours I've spent developing my expertise and getting into an empowered state of being able to take on these projects with confidence and getting them to work. So a few examples of that. I made this really janky website a few weeks ago. This was a weekend project, and it's called awesomemovies.life. And you can visit it. I think it still works. I'm not 100% sure. I wouldn't recommend you go there. It's trying to be a movie recommendation engine, because I was trying to figure out what to watch on that Saturday, and then I was like, okay, I need to build myself a movie recommendation engine. So I put this up, and one of the tweets that was a reply to mine was, wow, that's so cool that you got this to work in a weekend. And I was kind of reflecting on that at the time, because it wasn't as amazing to me. And the reason for that was that what this person is not seeing is that this is my 20th time making a website like this. So I see all the steps I was going to follow. I need Linode, I need a Flask server, I'm going to write some of this JavaScript, style sheets, HTML, I'm going to spin this up together. I need to scrape all these web pages, I need to extract TF-IDF vectors, I need to train an SVM. And all these things are things I've already done before, 20 times. I already have code snippets lying around from previous projects. And I'm just remixing what I have, and I've already done all of this. And so remixing everything into a new form isn't actually that much work, and allowed me to put this up over the weekend. And it's not that crazy. And this only comes from expertise. This only comes from having done it 20 times. And you can do this so confidently. The next example I would say in my life was Tesla Autopilot. So I was hired to lead the computer vision team at Tesla Autopilot about seven or eight years ago. And one of the first things I did actually when I joined the team was that I basically ended up rewriting the computer vision deep learning network training code base from scratch in PyTorch in some of the first few months after I entered the team. And I sort of rewrote the whole thing from scratch, and that ended up being a kernel of what it is now.
And I think to some extent to some people that looked impressive at the time, but for me it wasn't, because I was coming from my PhD and I spent five years doing stuff like that, and I knew exactly what needs to go into there. I need my training set, my evaluation sets, I need my training loop in PyTorch, I need my sort of configs, I need my log directories, I need to bring in a ResNet, I need to put in detection, we're doing regression, classification. And so the whole thing, like I'm anticipating all the steps, and that only comes from experience, that only comes from having done it 20 times before. And so I think this makes a huge difference. And things that look impressive are maybe much less impressive to you if you've done it 20 times before. So really try to get to this point where you have your 10,000 hours. It makes a huge difference. And just... Yeah, that's it. By the way, 10,000 hours, if you're doing six hours per day, I think this works out to about five years. So it's about the length of a PhD that you need to develop expertise in an area. So I think it's roughly correct that that works out to about a PhD length. The other thing that I found is actually quite useful is to keep the dopamine flowing. Be aware of your psychology, your brain, how it works, and what it needs to keep going and how to keep inspired. And so in particular, your brain is a reward machine, and it wants rewards, and you need to give it rewards. So what is a good way to give it rewards? And in my practice, it is by doing projects, working on projects, and continuing to publish them. And so here I have a web page snippet of some of the projects I have worked on in the past. And these are hackathon projects and random projects, and not all of them are good. Some of them are not quite good, et cetera. But what I love about projects is a number of things. Number one, I love that projects get you to work on something end-to-end and depth-wise. Like, normally when you go to classes, you're learning in a breadth-wise fashion. You're learning a lot of stuff just in case you might need it in the future. Well, when you're working on a project, you know what you need, and you're learning it on demand, and you're just trying to get it to work. So I think it's a very different mode of learning that really complements the breadth-wise learning and is very important. So I 100% encourage people to work on projects. The other thing is putting them up is actually also like a really good Jedi mind trick in my experience. The reason for that is that if you're going to put something up, you're thinking about all the people who are going to be looking at it, your friends and teammates and family and future employers, et cetera. And so that really increases the bar for your own work. And it makes you work harder because they're going to be looking at it and you feel shame if it was crappy. And so you work much harder and you're going to go that extra mile to make it really good. And that really, really helps. And lastly, when other people are looking at your projects, you're going to get that reward because they like it, they appreciate it, they fork it, they work on top of it, and so that feels good to your brain.
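For readers who want to see what the ingredient list above (training set, evaluation set, config, log directory, ResNet backbone, classification plus regression heads, training loop) looks like in code, here is a minimal hedged sketch in PyTorch. It is not the Autopilot code base; the toy random dataset, the five-class setup, and the loss weighting are illustrative assumptions.

```python
# A minimal training-skeleton sketch: config, train/eval split, ResNet-18
# backbone with a classification head and a regression head, a training
# loop, and a log directory. The random ToyDrivingDataset is a placeholder.
import os
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision.models import resnet18

cfg = {"lr": 1e-3, "batch_size": 8, "epochs": 2, "num_classes": 5, "log_dir": "runs/demo"}
os.makedirs(cfg["log_dir"], exist_ok=True)

class ToyDrivingDataset(Dataset):
    """Stand-in for real camera frames: an image, a class label, a 4-number box target."""
    def __init__(self, n=64):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        return torch.randn(3, 224, 224), torch.randint(0, 5, ()).long(), torch.rand(4)

class DetectionNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                    # keep the 512-d feature vector
        self.backbone = backbone
        self.cls_head = nn.Linear(512, num_classes)    # classification head
        self.reg_head = nn.Linear(512, 4)              # box-regression head
    def forward(self, x):
        feat = self.backbone(x)
        return self.cls_head(feat), self.reg_head(feat)

train_loader = DataLoader(ToyDrivingDataset(64), batch_size=cfg["batch_size"], shuffle=True)
val_loader = DataLoader(ToyDrivingDataset(16), batch_size=cfg["batch_size"])

model = DetectionNet(cfg["num_classes"])
opt = torch.optim.Adam(model.parameters(), lr=cfg["lr"])
ce, smooth_l1 = nn.CrossEntropyLoss(), nn.SmoothL1Loss()

for epoch in range(cfg["epochs"]):
    model.train()
    for img, cls, box in train_loader:
        logits, pred_box = model(img)
        loss = ce(logits, cls) + smooth_l1(pred_box, box)   # joint classification + regression loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for img, cls, box in val_loader:
            logits, _ = model(img)
            correct += (logits.argmax(1) == cls).sum().item()
            total += cls.numel()
    with open(os.path.join(cfg["log_dir"], "log.txt"), "a") as f:
        f.write(f"epoch {epoch} val_acc {correct/total:.3f}\n")
```

The structure is the point: once you have written a loop like this 20 times, swapping in real data, real heads, and real evaluation metrics is mostly remixing.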
And so the way that this comes together is you are getting your dopamine, you feel good, that way you can build up to 10,000 hours of experience, and that's what helps you a lot, snowball your projects from a small snowball all the way to a really big one and actually make change in the world. So in summary, that's I think how it works like on a high level, and the message is just keep hacking. That's it. And then hopefully this is the future that we're going to build together when we snowball all of our stuff, or something like that, but not the first picture I showed, hopefully. And that's it. Thank you. Andrej Karpathy, everybody! Thank you, Andrej. That was awesome. Thank you, thank you. All right, let's get to those pitches. The grand prize. Coming up, you're gonna hear eight pitches by eight projects, filtered through 290 submissions, narrowed down to eight, so y'all are gonna see some cool stuff. The grand prize is a $25,000 investment, an actual term sheet from the Berkeley Skydeck Fund. They must commit to hacking all summer on their project, and they must appropriately form of course a legal entity. How do you get money otherwise? All right. I would like to now tell you briefly about how this is going to go. Eight projects, as I said. Three-minute pitch. You guys ready? Three minutes? Yes, they're ready. The judges will then provide three minutes of feedback. And then after all the pitches, the grand judges will go and deliberate and pick a winner while we show you some other cool stuff. All right. I would like to introduce now to great applause everybody, please, because we have an incredible panel of judges. We are so pleased to have them. Please welcome our first judge, Brian Boardley with the Berkeley Skydeck Fund. Welcome, Brian. Marcy Vu with Greycroft. Welcome, Marcy. Nnamdi Iregbilem with Lightspeed. Welcome, Nnamdi. Irving Suh with Mayfield Fund. Welcome, Irving. Kurt Keutzer, UC Berkeley faculty and serial entrepreneur. Welcome, Kurt. And Mark Nitzberg, Berkeley faculty and director of the UC Berkeley Center for Human-Compatible AI. Thank you, judges. All right, we got eight startups warming up backstage. Let's give them a little drum roll. We're going to get them going. I first have to hear if the slide's up. The slide is up first. Are you ready? Are you ready? Yes! Please give everybody a warm round of applause. They've been up all night hacking, and they're ready to share with you. Please welcome the first project, Revision. Come on out. Come on out, Revision. Oh, yes, the mic. That would be helpful. Yeah, thank you. So good evening, everyone. It's my pleasure to be here on behalf of my team for the Revision project. And my name is Danika, I'm a rising senior studying computer science at UC Berkeley. We have master of design students as well as data science students on our team, and we're really excited to tell you about our project. So for our project, we're focusing on building an AI co-pilot tool for STEM textbook authors capable of detecting and mitigating bias in textbooks to create inclusive education content.
And there is a reason why we're doing this. When considering overall representation of scientists across textbooks, only 13.1% were women, compared to 86.9% men, in a 2020 study that features seven of the most frequently used biology textbooks within the US. And on average, people of color only appear every 320 pages of text, while white figures are observed every 24 pages, across 10 college STEM textbooks published between 2016 and 2020. So we thought about this problem long and hard, and it has been something that I've seen in my personal studies. Starting from elementary school to middle school, we constantly see different examples of word problems and other situations where text is always there. And it's not always reflective of the actual true history. And this research has been done by numerous scientists who have gone through this process of identifying people and creating databases, but no one is really hoping to create this problem, and there is just no current fix that helps address it. So the textbook companies are actually who our team identified as our buying customer. The current revision process actually takes six to 12 months of a committee of five or more full-time employees working on bias checks. And the issue here is that employees are actually not experts on their topic. They also bring in their personal biases as well. So our tool would come in right in between the writing and revising part of this entire cycle that developers go through when writing textbooks. So again, here is our competitor analysis. I'm sure many of you have used Turnitin or Grammarly when you're submitting even essays. And we really think that there needs to be an additional check here for bias, checking gender, racial, political, and other biases, and making this process affordable and automatic, so it's not a costly process for anyone. And throughout this process, we're addressing supply chain diversity. So starting from a younger age, the elementary school students would be able to use textbooks that truly reflect the true history as well as themselves. And here is our prototype. So we have our text box here on the left side of the screen, where you get to see in real time examples of the text that a writer is creating at the moment. And on the right we have an overall score and the bias checks for different categories. And we're using machine learning models as well as LLMs on the back end to actually identify bias. And I'm not sure if I can play the prototype. But... Okay, yeah. It does play. So, essentially, you can click through the different links to see the breakdown. And once you actually highlight one of these, we are also adding in an API through a couple of the sponsors here, such as the Hume API and more, to actually add emotional analysis as well in the textbook writing. And in addition to this, we're hoping to build a chatbot that can actually help by bringing in databases of unrecognized scientists and being able to sort of represent them, because bias actually exists in three different ways. One of them is through the actual text, such as saying firefighters, which would be nicer than saying firemen. And the other way is the entire tone and emotion, which is why our team used the Hume API to actually detect that emotional component. And the third one is mitigating bias. So we also considered adding in the chatbot.
So say, for example, you want to highlight scientists contributing to physics. You wouldn't just list a few male scientists and call it a day. We would also suggest equivalent contributions of female scientists as well. So please join me and our team in revisioning our future of education and work. Thank you. So I think everybody in communication today, nonprofit or profit, is concerned about diversity. So it seems like you have a much larger market than just textbook educators. Also a comment on market sizing and whatnot. I would think about potential ways you could expand the market here, because the number of people who are involved in writing textbooks is a relatively small group. But one way to think about it is maybe in this new era of AI-generated content, a much wider array of people can be part of this textbook generation process. That's one thing. And then I would also maybe consider selling directly to the consumers of textbooks, because in some sense, the bias you're talking about is internalized on that side of the equation, not on the manufacturer's side, and so there could be an incentive there for people who want to pay for something like this. Yeah, definitely. That's something we're considering. So the textbook companies would be our initial buyer that we're marketing to, but eventually it would be more of a Grammarly-checker type of tool that anyone can use. Yeah, I had a similar comment on TAM and market opportunity, and as you think about just how a textbook gets put into production, if you actually had it as a tool for, whether it's news or other areas, you would have more velocity, both in terms of getting the data to improve your models, but also greater impact. I'll just flag as well, I think everyone here is hitting the theme of how do we think bigger. Even enterprises, like companies sending out communications internally or externally, I know this problem exists everywhere, so that's kind of where my brain would go to. Thank you. Thank you. Whoops. Agent OS. Please welcome Agent OS. Hey there, everyone. My name is Shashank. I'm here today with my friends, Agam and Dhruva Huja, somewhere in the crowd over here. We built today Agent OS. Picture this. You work at a hair salon, and you guys are bombarded every single day and every single year by your accounting and tax preparation qualms. These are things that are very hard to deal with, and you've heard of tools like OpenAI, ChatGPT, LLM this, ChatGPT that, everything, but you have no clue where to start using these technologies. And that's no fault of your own. The current state of the technology right now is very bad at multifunctionary tasks. More so, it's very hard as an individual developer, sometimes even non-technical, to even get started with even the simplest automations or workflows or tools with such LLMs. Even engineers with years on years of experience in this space take tens to hundreds of hours and even thousands and thousands of dollars to even get started to build something. This is where Agent OS completely transforms the landscape. With Agent OS, you're able to create multi-agent workflows in a matter of seconds from natural language. What does that even mean? Take your average corporate org structure. You have your managers, you have your workers, and sometimes you even have your interns. Everyone is really good at what they do. They have their tools, their skills.
Let's say John is really good at charting and making PowerPoints. Let's say Steve is really good at Python coding. Everyone is really good at what they do. In this, you have everyone collaboratively working together to create this common solution for someone coming from higher up. That's how Agent OS was designed. Our engineers were able to replicate this human collaborative process programmatically using LLMs. What does this do? This allows everyone from the common Joe all the way up to enterprise clients to be able to interact and use these multi-agentic workflows in their day-to-day life to improve their quality of life or productivity. All in a matter of seconds and a few sentences. Let's go back to the case study of the hair salon. In the process of doing your taxes and accounting, you have multiple steps. You have your collection from your receipts and your invoices. You have calculating your cash flow, all the calculations you have to do. You have to manage your workers. And then you also have to do your general summary of your insights for the year, how you were spending, what you were spending on. And you have to also do a lot of clustering and analytics on this. This is a very complex workflow that's nearly impossible for modern day LLMs at the current state to do right now. You can take ChatGPT, you ask it a question for even more than three things, it'll forget what the first thing was by the time you're at the second. It doesn't work that way. With Agent OS, this completely changes, where you're able to have these complex workflows. Let's dive into another demo. So let's say I'm an analyst at JPMorgan and my boss tells me every morning I want a report of XYZ stock in the morning, a detailed report on paper. How do I do that? I use Agent OS. On the screen you can see a bunch of other complex use cases of multiple agents working together collaboratively, but in the toolbar, in the search bar, you can see the use case of the analyst. Here I have to do market research, live stock data, I have to search the internet, go on Yahoo Finance, then I have to create my analysis, technical analysis, qualitative analysis. Then I have to do what my boss is telling me to do. And after all of that I have to create charts, graphs, and visualizations. Here, you can build tools using natural language, like the one right there that says, write me a tool that fetches the Meta stock price from Yahoo Finance. In a matter of seconds, the common Joe or anyone is able to create that tool, connect them to workers. You can think of workers as your everyday employees, agents, people that perform these actions using the tools, and then connect them to super teams. And these teams are able to, on the screen you see four, but you can scale this up to 40, 400, basically complex vertical and horizontal organizations that are able to perform complex decision making and complex analyses for anyone, from enterprise to consumer. What does this do? With the multi-agent, multi-team framework, this completely opens the landscape up for anyone and everyone to take on the power of LLMs into their own hands from natural language. Take your average farmer at a farmer's market. He's trying to create his marketing campaign for the upcoming farmer's market this Sunday.
He has no clue where to start looking at his metrics, looking at the customers, looking at the weather, and creating these brochures, papers, pamphlets, and whatnot. With one line and one minute using Agent OS, he can create all the documentation he needs in order to enact this stuff and be able to perform successfully and continue and grow his business at his farmer's market. Things like this are completely opened up with Agent OS, and we hope to completely democratize the process of using LLMs at all scales, at all geographies, and all use cases within sentences and seconds. Thank you. That's a compelling proposition. The one thing that I worry about is right now the agents are the LLMs performing these tasks, and there's a certain question about the veracity and reliability of what they're doing. And so I think that in a future where we have that reliability, this would make perfect sense, but I would want to add a kind of tandem subject matter expert, maybe looking over the shoulder of each of the agents. I think next time I hear this pitch, I'd love to hear about the one market you're going to crush. It's hard for me to imagine serving a hairstylist one day and a Morgan Stanley analyst the next. This is a huge opportunity and a big, bold mission that you have. I would want to dig a bit deeper into your tech stack and the people you have on your team, because these are really complex problems and issues. And also agree that what would be your first area of focus, because it's pretty broad and wide. I'll say I kind of like the broad focus, and there's a lot of individual startups tackling each of these individual problems, whether it's invoicing or research, so it might be interesting to figure out how to loop in all these other tools that are out there and really kind of just be like an interface layer and let these other companies solve the technical challenges. Yeah, I think the value proposition of creating multi-agent workflows in a matter of seconds is really compelling. I think the next step would be trying to figure out how can you go from simply performing these tasks to becoming the best at these tasks. So, for example, going after the outliers, sort of the thesis around coaching networks. Some startups do this, and they do it better for certain verticals than others. So I think doing more research around that could be really compelling. The only thing I would add is just think about enterprise security and how you solve for that. There's a lot of authentication and authorization you're going to have to do for all these agents, so just have an answer for that. Well, yeah. Thank you so much. Thank you, everyone. Thank you, Agent OS. All right, next up, Skyline. Come on out, Skyline. Hey, everyone. Hey, so my name is Rajan, and I'm a first-year student at the University of Waterloo, and I study software engineering. And I fundamentally believe that cities shouldn't be static. They should optimize and scale for the people who inhabit them. And so we built Skyline. Skyline is an optimizer, and it allows you to better understand how to model cities using agents and optimize things like traffic and transit to inherently increase mobility and reduce things like carbon emissions. So this is a very weird problem that we solved, but I want to walk through the case study of Los Angeles. So Los Angeles is one of the largest carbon emitters in North America. This is mostly because of their transit, because of the amount of cars.
And so what are ways in which we can optimize this? Well, let's look directly at the people who inhabit Los Angeles. We can extract census data, things like age. We can look at things like gender. We can look at things like income. We can find population density graphs. And using this information, we can start to find patterns. Specifically what we did is we created 500 distinct agents. Each agent is a different citizen with different interests, and what we can do is we can give them each their own chain of thought. Each person here has their own day in their life. For example, this person is a very young, I believe this was a 22-year-old with a very large income. He has a long day at work and after work he goes to the gym. We can now reason about what this person may do and now model this on a map. Now once we have how these different agents are moving around, what we can do is we can try and optimize things like transit. So what we do here is we have our own proximal policy analyzer, and this allows us to create simulations on what we believe to be the best way to understand how we can move around from any point A to B in the fastest way at the lowest carbon cost. These are our own carbon cost analysis mechanisms, our own machine learning models to better understand how we may be emitting carbon and how to reduce this through our transit. So this is a lot that I've thrown at you, and I think the best way for me to represent this to you is through a video. I hope this video loads. Is it possible to play the video? So what we first do is we have an agent-based simulation. These are 500 distinct agents running in parallel. Now they each go around throughout their day, and what we can do is we can find patterns in how they move around. Now the best part is, now that they're all back in their original position, we can start a generation of transit. And we're using these patterns to now generate, live, different transit systems that we believe to be the most optimal. So what Skyline is, we're not a company that does analysis of transit. We are a human modeling company, and that allows us to better understand and better predict how things around us will change and how we can optimize them using these patterns. Yeah, so that's Skyline. Happy to take any feedback. Wow. I just want to observe that what you're doing in creating a sort of digital twin of a city is essentially each citizen is being simulated using one of these really powerful, expensive things, a language model. And it will be probably an important step to draw from the language model some of the statistics that are actually fed in in the first place and make sure you're getting out something representative. That's very impressive. Similar comment, I think there's all sorts of economic theory about agents and modeling their behavior and their values and whatnot. And the thing that usually gets you is the sort of heterogeneity across the population. So making sure that that actually represents the populations being modeled is important. And then the other thing also related to value I would think about is just value capture for your own sake.
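As a rough illustration of the agent-based setup Skyline describes above (sample citizens from census-style data, give each a daily schedule, and aggregate their movement into demand that a transit optimizer could act on), here is a minimal sketch. The zone names, distributions, and schedule rules are invented for illustration, and the PPO-style optimizer itself is left out.

```python
# A minimal sketch: sample 500 synthetic citizens from assumed census-style
# distributions, give each a stylised daily schedule, and count origin-to-
# destination trips. A transit optimizer would consume this demand matrix.
import random
from collections import Counter

random.seed(0)
ZONES = ["downtown", "westside", "harbor", "valley"]   # invented zones

def sample_agent():
    age = random.randint(18, 80)
    income = random.lognormvariate(10.5, 0.6)              # assumed income distribution
    home = random.choices(ZONES, weights=[2, 3, 1, 4])[0]  # assumed population density
    work = random.choice([z for z in ZONES if z != home])
    return {"age": age, "income": income, "home": home, "work": work}

def daily_schedule(agent):
    """A stylised day: home -> work -> (maybe gym) -> home."""
    trips = [(agent["home"], agent["work"]), (agent["work"], agent["home"])]
    if agent["age"] < 35 and random.random() < 0.4:
        gym = random.choice(ZONES)
        trips[-1:] = [(agent["work"], gym), (gym, agent["home"])]
    return trips

agents = [sample_agent() for _ in range(500)]
demand = Counter(trip for a in agents for trip in daily_schedule(a))

# The busiest origin-destination pairs are where new transit capacity pays off.
for (src, dst), n in demand.most_common(5):
    print(f"{src:>9} -> {dst:<9} {n} trips/day")
```

Skyline's version reportedly uses LLM-driven chains of thought per agent rather than hard-coded rules; the sketch only shows the shape of the data that such a simulation hands to the downstream optimizer.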
I feel like this is like a category of software where like, yeah, the economic impact of this could be massive, but how much of that do you get to capture as a software vendor is like less clear to me, but it's very interesting. I guess I would be curious about maybe some more nuanced enterprise use cases as well, if it's concerts or security or stadiums. So just thinking about are there more micro use cases that there's a more direct ROI with for this sort of modeling? We try to consider ourselves to be a human modeling software, and this is just one of the most visual applications, which is transit. Awesome. Thank you so much. Thank you. Thank you, Skyline. All right. Next up, we have... We have Spark. Please welcome Spark. Hi. How's everyone? How's Cal? We are Spark, and we're giving a voice to new entrepreneurs, young entrepreneurs. So, let's admit it. Cold calling is really hard. I mean, resources are hard to get. It's a steep learning curve, and getting attention is hard. If you've cold called someone, you know they don't have time. They'll say, oh, sorry, call me back later. I mean, they're busy. Everyone's busy. We have things to do, so we have to figure out how can we earn the time of working people. There are existing solutions: trial and error is long and arduous, a sales coach is expensive, and finally, if you have a sales partner, chemistry isn't easy if you're just meeting them, right? Well, we have a process. You upload a transcript to our software. We go through and analyze the emotion. We aggregate this data, and we give you productive feedback. Who's our target market? Well, look around. You guys are our target market. People who are engineers, people who love to build. And say this weekend you made some sort of product you want to sell. You don't have much experience with sales or outreach. With our software you can record your cold outreach, and we can tell you what you've done right and how you can improve to hopefully land your product where it needs to be. And later on we want to expand to call centers and sales staffs, because we think we can spread this across an organization and it can be highly profitable. We have usage-based tiering, so 75 cents a minute for 1,000 hours, going up to 40 cents for 10,000. So this is our software. And I want to tell you guys a story. I started being an entrepreneur around six months ago, and we made an AI body cam analysis startup. So I did 100 phone calls, 100 cold calls. I got no clients. 200, 300, 500, and 700, no one was responding to me. So by 800, I got actually three, and I realized something. The human brain is pretty amazing. We're able to pick up on patterns, but at the same time it's kind of inefficient, because it took 800. Here we look at the emotion between every single sentence and we figure out spikes of emotion and decreasing emotions. We see that when we talk about security and data privacy with police officers, it shows an increase in their interest. And this was a trend among many conversations we had. So in our analysis page, we see in the top left that mentioning AI accuracy and efficiency, increased officer safety, and discussing cost savings really helped us when we were talking to officers. Because we're some college students, right? We're dealing with some pretty confidential data. Bringing this up early really helped improve our rates. And the four things you see here in the corners are the different triggers we generated automatically based on the cold calls we had.
So one is positive reactions, negative reactions, de-escalating tense situations, and normalizing exciting situations. We also generate insights too, based on whatever cold calling trends you make. We also have a RAG, so you can upload your company knowledge, your target audience, and your pricing information. So if you make a mistake, don't worry, we got your back. We'll tell you, hey, maybe instead of saying this, you could have said this, because it might have helped you out a little bit. Sorry, my team picture isn't on here, but thank you to Tushar and Nick and Krishna. You guys were a great team, and I'm honored to be here representing you guys today. I'm open to feedback. Thank you. I guess I need to be the first person to say that you're entering a pretty competitive market with other offerings here. Yeah. Yeah. I'll say something that stuck out to me was this idea of insights. But I think, you know, at an organization, there might be a sales team and a marketing team and an online web team, and those teams don't really talk to each other. So maybe it's interesting to think about how do you pull insights from one channel of sales or marketing and actually bring that into another channel. So maybe the insights from cold calling are actually influencing what's going on on the website. Maybe there's some interesting spots of opportunity there. Yeah, I can actually talk about one facet of this we want to explore deeply. Let me give an example: say we have three founders in the company, right? I have a first cold call with one person, and later on my second co-founder wants to set up a warmer call in the future, and then my third founder wants to set up a third call. We want to build a profile for this client as they go along so we can truly understand them. And also, we want to develop a profile on ourselves, too, so we can learn more about ourselves as we go and how we're behaving, make sure that we're learning as we go. So we're thinking, if we develop a CRM on top of this data that we leverage, then we can connect multiple teams and enable cross-functional benefit. Yeah, I had a similar comment. I think it would be really game-changing if, in addition to some of the real-time analyses you guys are doing around sentiment, you could seed the system with information on prior calls or this person's particular strengths and weaknesses and how they complement those of the other people on the team, and really build the CRM, this knowledge graph, around each person's strengths and weaknesses on the team to be able to better fine-tune the system. Yeah, thank you. You guys saw the analysis, but also there's a long list of past conversations. You can actually go into every single conversation you've ever had and look at it deeply the same way you did in the latest conversation. I would think about the full sales funnel. This is pretty deep down in it. And as you think about where are you really going to be able to convert, or where is the wedge that really matters? Because there's a lot that goes into converting a sale, and it's not just the cold call. So is the issue are you actually calling the right people, or is the issue are you actually speaking to the right decision makers? So just thinking more broadly about that funnel and where you might actually be able to have the most impact and have the right wedge into the broader product suite. Yeah. Thank you. Thank you. All right. Thank you guys.
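As a sketch of the per-sentence emotion analysis Spark describes (score every sentence of a call and flag spikes of rising or falling emotion), here is a minimal illustration. The tiny keyword scorer is only a stand-in for whatever real emotion model the team uses, and the example transcript is invented.

```python
# A minimal sketch: score each sentence of a call transcript, then flag
# "spikes" where the score jumps sharply between consecutive sentences.
# The keyword lists and the transcript below are hypothetical placeholders.
POSITIVE = {"great", "love", "interested", "safety", "savings"}
NEGATIVE = {"busy", "later", "expensive", "worried", "no"}

def sentence_score(sentence: str) -> float:
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def find_spikes(transcript: list[str], threshold: float = 2.0):
    scores = [sentence_score(s) for s in transcript]
    spikes = []
    for i in range(1, len(scores)):
        delta = scores[i] - scores[i - 1]
        if abs(delta) >= threshold:
            spikes.append((i, delta, transcript[i]))
    return scores, spikes

call = [
    "Hi, this is a quick call about our body cam analysis tool.",
    "We're pretty busy, can you call back later?",
    "Sure, but briefly: it improves officer safety and cuts costs.",
    "Okay, we do care about safety and savings.",
]
scores, spikes = find_spikes(call)
for i, delta, sentence in spikes:
    print(f"spike at sentence {i} (change {delta:+.0f}): {sentence}")
```

Aggregating these spikes across many calls is what would surface the kind of triggers and insights described in the pitch, such as security and cost savings raising interest with police officers.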
We appreciate it. Bye. Enjoy your questions. Clicker. Clicker. Clicker. Thank you. Next up, we have Hear Me Out. Please welcome Hear Me Out. The big one. Hi, guys. Hi, my name is Marcus. I'm from Hear Me Out. And what we've built is an AI-driven customer service pipeline optimizing call matching and visibility. So that might leave you scratching your head. So let's just talk about the problem. So let me give you a bit of context. I'm an exchange student. And when I first came here, I had to deal with so many things. I had to deal with banking. I had to deal with insurance. I had to deal with deliveries. And I even had my car break down on me, and that was a real pain. In short, I was overwhelmed by the sheer number of customer service calls, because for each of these things, I had to make so many calls just to get things done. And I think a lot of you guys can relate to that. We've all had our fair share of bad call experiences where we're upset, the customer service representative is also upset, and nothing gets done. We've also had good experiences as well. And I think that's the core of what we're trying to tackle here. We want to create a pipeline that tries to provide optimal matching and provide visibility on emotional data to businesses. So we also did the research, and I think the numbers speak for themselves. This is a key problem with a sizable market, and a huge number of people are affected by this as well. And this is our problem statement, which is, how might businesses which offer customer service calls provide a better experience for their customers? So we think we can tackle this in four key components. First of all, an improved call bot. We're all familiar with that initial robocall that we have to deal with, and sometimes it's really, really frustrating. How many times have you had a call bot talk to you and it just doesn't direct you to the right person? I think we've all experienced that before. Second and third, and I think this comes hand in hand, is just business visibility. We want to provide businesses with better visibility of both call experience data as well as customer representatives' bandwidth over the day as they continue to take calls. And finally, this is where we put those two together. We want to take that data and optimize a customer's journey through better customer-to-service-representative matching. So I won't bore you with this data, but with that in mind, we developed a set of decoupled microservices. And I just want to point three key parts out to you. So first of all, we want to assess customer agreeability with an initial robocall. But this won't just be your normal robocall. We want to use Hume's EVI to manage the robocall in an empathetic manner, such that it measures the customer's emotions as they go through the call, and eventually outputs an agreeability score for the customer. Second of all, we have a call analysis feedback loop, and that's that whole thing on the right that goes down below. And what this does is, once you have a call connected between a customer and a representative, it takes in multiple factors of data such as the call duration, the emotional change over the call, and the overall call outcome. Using Hume's emotional measurement API, we can then also generate a call score. Finally, and this is the third and key part to this, it's the matching API.
Using the two things that I just mentioned, we can best match a customer to a customer service representative which matches their vibe, their energy, and their emotions, based on how our custom model is developed. So what's the outcome of all of this? As a representative goes through their day, their state changes depending on how their calls go, and their bandwidth adjusts accordingly. This affects the subsequent customers which they are matched to in a positive manner, and creates a better experience for both parties. So there's a lot more which we can build with what we have, but with this foundational pipeline, we believe we effectively tackle the problem that needs to be solved. That's all I have for today. Thank you. Nice. Thank you. Yeah, I mean, a little bit of feedback similar to the last company as well. Just there's a lot of companies working in this space too. So I would just continue to think through, you know, how to find that core differentiation if you continue to work on this after the hackathon. Yeah, I completely agree. I think a key part that we thought was really exciting was just what you can achieve with custom models. What we're doing by developing a feedback loop is we're creating something where we can create, in a sense, a model which trains itself. We can assess how calls might improve or get worse after the matching, and that feedback gets fed straight to the matching API so that it knows whether or not it's done a good job. And we find that really interesting, and we think that that's a key differentiating factor which we can achieve. There might be some opportunities for building some sort of synthetic data pipeline here, where you could just sort of simulate calls with an AI bot of some sort and use that as feedback. I don't know how good that data will be or not, but it could be interesting. Yeah, no, that's a really interesting thought. Thank you. I know right now you guys are targeting customer service agents as well as call centers. Something that could be interesting to think about, as you go through the different stages of the software adoption lifecycle, from your early adopters to your early majority and then your late majority who's eventually going to justify your valuation, is what those ideal customer profiles are going to look like down the line. Yeah, thank you for that. I think one key thing was, we actually had a judge come to us and talk to us about how they're doing something similar for sales representatives as well, and we found that really interesting. So happy to figure out how we can pivot if that need arises. Thank you so much. Thank you. Thank you, Hear Me Out. All right. Next up we have Dispatch AI. Please welcome Dispatch. Hi, everyone. My name is Spike, and I'm with Dispatch AI. In the United States, over 80% of 911 call centers are critically understaffed. This means that in a crisis, people with real emergencies aren't able to get the support they need because all the human agents are busy, and they're often put on hold. This is particularly an issue in our neighboring city of Oakland, where last year the average wait time was over 60 seconds to pick up a 911 call. Now, in an emergency, every second counts, and this could be literally the difference between life and death.
This is unacceptable, and that's why we built Dispatch AI, the world's first AI 911 call operator designed to eliminate these wait times. Our system is powered by two key components. First is the voice AI. The voice AI will step into calls when all human agents are busy, and it will work with the caller to evaluate their situation, extract the location, and optionally dispatch first responders directly to the scene. And the second part is our powerful dashboard for the operator themself. So the operator will have access to a bird's eye view of all of the ongoing calls, which will be automatically triaged by the AI into different forms of severity or priority. Further, they'll see that the AI will automatically extract the location and will provide a live transcript of the call so that they can quickly see what's going on and even step into the call once they're available. Further, they have buttons that allow them to directly, with just one click, because the location is already fetched, dispatch police, firefighters, or paramedics. All of this is done from a human-centric angle. And the way we achieve this is by taking into account the caller's emotions. So, for instance, when a caller shows signs of anxiety or fear, the system could work more to calm them down and make them feel at ease before taking the next safe step. This system is fully designed with ethical safeguards in mind. And part of that was fine-tuning a model on over 500 911 calls so that it could understand the proper protocols and be knowledgeable on a wide variety of possible scenarios in which a 911 operator could assist, including fake calls or instances where it may not need assistance. This is all powered by our innovative tech stack that utilizes a variety of AI components, including the voice AI, the emotional analysis, and of course, a key component of this, the fine-tuning itself. Our mission is to make requesting emergency services more effective and efficient. Thank you. I'll go first. I thought you did a great job. I thought you presented the problem set, the opportunity, and the product very clearly. You only had three minutes, but you hit all the relevant points. Thank you. The one thing I would encourage you to think about a little bit is sort of like the optimization function for these municipalities, right? Because if people in Oakland are waiting 60 seconds to get their 911 call answered, like there's a reason for that. I don't know what that is, but somehow these municipalities have decided that that's how they want it to be. And so I would just think about, you know, as you bring in AI to this problem, doing the potentially difficult A-B test of making sure that whatever it is that these municipalities are actually optimizing for is actually improved by this. Because it seems like a no-brainer when you first say it, but clearly it's this way for some reason that is probably nuanced and tricky. So just something to think about. Any other feedback? I don't think so. Just following up on that, I think the key is ease of adoption. I mean, I think it's going to be easy to make a productivity argument to the city of Oakland, but then you've got to think about who's actually installing, who's paying for this and who's installing it. Okay, that's good. Thank you so much. Thank you, Dispatch AI. All right. And next up we have ASL Bridgeify. Please welcome ASL Bridgeify.
Hello, my name is Isha, and today I'll be presenting ASL Bridgeify, the next generation of sign language interactive learning. Oh, this? Okay, sorry. So what was the inspiration behind this? Well, ASL is the third most studied language after Spanish and English, and over a billion people are projected to have some sort of hearing loss, which is why it's even more important to have a seamless way for people with hearing loss to communicate with people without it, and vice versa. And next, there's over 15,000% return on investment over a 10-year period, demonstrating the value proposition. And existing platforms like Duolingo surprisingly do not take into account ASL learning, which is why it's important to build an interactive platform where individuals can receive the accuracy of their signed text as well as characters. Now, our solution includes three proprietary AI models. First, we use the random forest algorithm in order to map input pose estimation frames, with a frame length of 200, to the predicted letter from A to Z. Next, we also use an LSTM model, which captures sequential dependencies, to map from hand pose coordinates to the actual word. And then we also have our individualized RAG, pulling in PDFs that are specific to ASL learning that get chunked and transformed into a vector space. Now, as you can see here, this is hand pose estimation extraction using the MediaPipe library. So you can see A, B, and C. And here's our platform, where there are different modules to learn alphabets, signs, as well as sentences, and we even have a real-time ASL practice, to capture in real time the letter that you are actually signing and give you the accuracy for that. So here's an example of us using the MediaPipe library to actually extract all of the hand key points. And here are some videos where there are over hundreds of words that you can actually view to learn each of the hand signing frames. Now this is our proprietary RAG. And the way we've trained this is we've collected a variety of PDFs that are essentially manuals for ASL learning, and potentially in the future we would want to incorporate things like YouTube transcriptions that can actually be transformed and embedded within our vector store. Now, in the future, hand pose estimation doesn't just have to be localized to ASL. There are plenty of other opportunities for human pose estimation, including fields like dance and martial arts, where you can not only identify certain techniques but you can also get feedback generations from certain input frames. And in the future this could also be integrated into existing solutions such as Zoom, Loom, and FaceTime video, so that, given the signing of a certain sentence, you can get in real time the actual predicted sentences and words. Okay. That's nice work for 36 hours. I would be... I spent some time creating assistive technologies for the blind, and I would be just very aware of the market and how you'll approach it and who will be paying. I think that will be a good thing to pay attention to. Thank you. Yeah, as you think about the market, I feel like these language learning apps are tricky to kind of scale to meaningful businesses.
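For reference, here is a minimal sketch of the pipeline ASL Bridgeify describes above: MediaPipe hand-landmark extraction feeding a random-forest letter classifier. It is not the team's code; the image paths and labels are hypothetical, and a labelled dataset of ASL frames is assumed.

```python
# A minimal sketch: extract 21 hand landmarks per frame with MediaPipe Hands,
# flatten them into a 63-dim feature vector, and train a random forest to
# predict the signed letter. File paths and labels below are placeholders.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

mp_hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_features(image_path: str):
    """Return a 63-dim vector (21 landmarks x 3 coords), or None if no hand is found."""
    image = cv2.imread(image_path)
    result = mp_hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in landmarks]).flatten()

# Hypothetical labelled frames: (path, letter)
samples = [("frames/a_01.jpg", "A"), ("frames/b_01.jpg", "B"), ("frames/c_01.jpg", "C")]
X, y = [], []
for path, letter in samples:
    feats = landmark_features(path)
    if feats is not None:
        X.append(feats)
        y.append(letter)

clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print(clf.predict([landmark_features("frames/unknown.jpg")]))  # predicted letter
```

The word-level model the team mentions would instead feed a sequence of these per-frame vectors into an LSTM (or, as the judges suggest, a transformer), but the landmark-to-feature step stays the same.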
You know, there's sort of like Rosetta Stone, whatever, 20 years ago, and then there's been like Duolingo in this most recent gen, but there aren't like that many that get to meaningful scale. So might be worth just thinking about that market and what are the kind of success drivers. I think, even as I mentioned previously, apart from just hand pose estimation, there's a big market for body pose estimation. I think especially in things like combat training, especially if you look at the military, even dance performance companies where they have to train dancers, and there are actually specific techniques for which they want ground truth feedback, I think those are also potential markets that could be ready to penetrate into. You chose more traditional machine learning algorithms and early neural nets like LSTMs, and that may be the right answer. That's not obvious to me, but I think, for today's audience, you would need to explain why you're not using more contemporary gen AI algorithms. Yeah, so initially we were actually thinking about using more encoder-based transformer models, but we ran into some struggles, so we ended up settling on the LSTM. But in the future, we would obviously adopt more of the state-of-the-art transformers, and even in the case of feedback generation for given hand poses, that could be an encoder-to-decoder self-attention model that you could train. Okay, thank you so much. Thank you. Thank you. All right, our last contestant for the grand prize is GreenWise. Please welcome GreenWise. Please. When I was 14, I stopped eating all meat. I lasted about two months. Now, even though I still eat meat, there are a lot of small changes you can make that have a huge impact on your carbon footprint. For example, by switching from beef to chicken, you cut the carbon footprint of your meals by six times. What we do is we help you make that switch from beef to chicken for everything, for your shoes, your shirt, household supplies, food, everything has a carbon cost and a carbon footprint that we can mitigate. So how does a consumer analyze all their purchases and the carbon footprint of anything and try to make all these very difficult decisions and research about how they should change their actions? Well, this is where GreenWise comes in. We seamlessly integrate with existing customer purchase models to basically input what the consumer is already doing, for example, through receipts or through emails, and we integrate with Apple Pay, with Amazon, and with Square to automatically get their purchases into our system. From there, we vectorize their purchase and compare it to our vector database. This database has all the carbon footprints of over 10,000 products that we've analyzed and made sure that these are accurate carbon estimates. Additionally, by using the vector embedding, we make sure that these similarity scores are very accurate. It's not an LLM that can hallucinate. These are real accuracy scores and real carbon predictions. From there, it directly can tell them an alternative product that is very similar but has a lower carbon footprint. Additionally, this presents a lot of room for scaling, when other businesses want to analyze the carbon footprint of their products or for events and other bigger venues. So, from good intentions to reality. Let's make it happen. It's a very innovative RAG use case. I would have never thought of that.
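As a sketch of the embedding-similarity lookup GreenWise describes (vectorize a purchase, find the closest items in a catalogue of known carbon footprints, and suggest a similar but lower-carbon alternative), here is a minimal illustration. TF-IDF vectors stand in for whatever embedding model the team actually uses, and the catalogue entries and CO2e numbers are made up; note this is a pure similarity lookup, not retrieval-augmented generation, which matches the clarification that follows.

```python
# A minimal sketch: embed catalogue items and an incoming purchase, rank by
# cosine similarity, and surface similar items with a lower footprint.
# Catalogue names and kg CO2e values below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    ("beef burger patties 1kg", 27.0),
    ("chicken breast 1kg", 6.9),
    ("plant-based burger patties 1kg", 3.5),
    ("leather sneakers", 15.0),
    ("canvas sneakers", 7.0),
]
names = [name for name, _ in catalogue]

vectorizer = TfidfVectorizer().fit(names)
catalogue_vecs = vectorizer.transform(names)

def greener_alternative(purchase: str, top_k: int = 3):
    sims = cosine_similarity(vectorizer.transform([purchase]), catalogue_vecs)[0]
    ranked = sorted(zip(catalogue, sims), key=lambda p: -p[1])[:top_k]
    (match, footprint), _ = ranked[0]                      # closest catalogue item
    greener = [(n, f) for (n, f), _ in ranked[1:] if f < footprint]
    return match, footprint, greener

match, footprint, greener = greener_alternative("1kg beef burger patties")
print(f"matched: {match} ({footprint} kg CO2e); greener options: {greener}")
```

Because the output is just the nearest catalogue entries and their stored footprints, there is no generative step that could hallucinate a number.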
We're not using RAG. Oh, it's not? It's similar in that it uses a vector embedding for finding similarity, but the similarity is directly the output. Yes, that's right. Yeah. Is this a subscription product? So you would imagine it being a subscription product? We would imagine. Probably not. Ideally we'd integrate with existing businesses like Instacart or Safeway so that they can show our results on how green certain products are, or their carbon footprint, on their app. But it also works for consumers to use on their own, as demonstrated here. People wouldn't pay for a subscription, though. Okay, I think that's all the comments. Thank you so much. GreenWise, thank you. All right, I would now like to invite our esteemed judges to convene in the secret room where judges make their decisions. And we are going to have the special prizes. So as I mentioned, a bunch of sponsors came to make this all happen. We're an educational program, and it is entirely due to the support of these sponsors. And they're not just providing support, they got cool prizes! So let's bring them on. In just a minute you're going to hear from each one. These are the sponsors for today. And I also want to thank our community sponsors. These are startups, very cool startups who hung out and helped our young hackers with their needs and their cool tools. All right. So our very first special prize is going to be announced by a very special campus partner. I'd like to welcome Academic Innovation Catalyst. I'd like to welcome out here Matt Cincini and Paul Wuerck to tell you about AIC, one of our newest campus partners doing very cool stuff. Please give them a welcome. Thank you. Thank you so much, Caroline. It's just a thrill to be here. So my partner Paul and I created Academic Innovation Catalyst, or AIC, to release more innovation from academic research for positive social impact. And we're focused initially on the areas of climate tech and AI for good, which is why we're here today. How do we do this? Well, we make proof of concept grants, so no strings attached, non-dilutive grants to academics with an idea, and then we help them take that idea, carry it through commercialization to scale and sustainability. And so that's what we do. We're thrilled to be here today, and we'll be making two awards to the most compelling business plans or innovations involving the use of AI to make the world a better place. And we couldn't be more excited to announce them in five seconds here. I'll just say that we met with many amazing teams. It's been an extraordinary weekend. Thank you so much for including us. We had to narrow it down to two. It was tough, but I think you'll see that they're well-deserving. So with that, let me hand it to my partner, Paul Wuerck, to announce the winners of the AIC AI for Good Awards in 19, I'm sorry, 2024. This is what happens when you get old people up here on stage. So at any rate, we are really thrilled to be here, as Matt said, and we're especially thrilled with the fact that so many of you are putting your talents to work for such great causes and for the betterment of humanity. And AI has so much potential in so many realms, but among the most important is to make the world a better place and to make a social impact. And so with that, we're thrilled to announce the first two winners, Dispatch AI and ASL Bridgeify. So these are tremendous companies.
Again, the competition was so strong. May I ask, actually, both sets of winners to stand in the audience here? And thank you again so much for the terrific work. I think as you heard, ASL Bridgeify is doing for sign language what Duolingo has done for learning other languages, and it is so important. It's incredible and shocking that it's an underserved and currently unserved market, and their technologies are going to change that. And Dispatch AI, what can you say? I mean, it's such an important issue to be able to get emergency response, to be able to get a response when you need it. And, of course, the reality is, when we have, unfortunately, too many mass catastrophes, the time when you need the response most is when you're often most short-staffed. And so Dispatch AI is using artificial intelligence and a variety of technologies to speed that process up and to help both the dispatchers and the people that the dispatchers are helping. And so can I ask the Dispatch AI team to stand up as well and be recognized? It's a great job. Congratulations to all of you and to everyone who is here today. Thank you so much. Thank you, Matt and Paul. Thank you, Academic Innovation Catalyst. Our next special prize is going to be introduced by our very own general manager at Skydeck, Sybil Chen. Give her a welcome. Hello, everyone. Hope everyone has had a great weekend. At Skydeck, about a year and a half ago, we launched the Skydeck Climate Tech Track, in part thanks to a nice grant from the university, $150,000 to build out the track. And right away, we started putting that to work. We grew our advisor network from maybe like five advisors in climate tech to now over 30 advisors that are in the climate tech space. And beyond that, prior to the grant, we had maybe three or five startups every batch that were in climate tech. And now we average 15 startups per batch in the climate tech space. And we really hope to see that grow. So I'm very pleased to announce that the winner of the Skydeck Climate Tech Track is Team GreenWise. I think they're still in the green room because they just pitched. They were the last ones to go on stage. But they really kind of represent the team members that we like to see at early stage startups. It's three team members that are best friends from middle school. Oh, they're all here on stage. Come on out. I wasn't expecting that. But Anthony, Ben and Ethan, three best friends from middle school representing UC Davis, UC Santa Cruz, and UC Santa Barbara. And they've built a platform for carbon footprint tracking with actionable recommendations for vendors so that people and companies can reduce their overall carbon footprint. So please help me in congratulating this team. Winners of $25,000. All right. Thank you so much. Thank you, GreenWise. Clicker. Clicker. Thank you. All right. Next up, special prizes from Intel. Intel, come on out. Intel was here. Their table was swamped. I'd like to introduce Gabrielle Amaranto. Hi, everyone. Thank you all so much, and thank you to the organizers for having us. We've had such a great weekend, and your projects are so amazing, so thank you to everyone who joined our track. As you can see, the winners are behind me. Congrats to all the winners. We have our raffle winner, Isla Arres. Third place is Asel. Second place is Batteries by LLM. And first place is Dispatch AI. So let's give them a round of applause. Yes, great job, amazing projects. If you won, please meet us outside. I wanna hand you your prizes. We have them with us.
So please meet us outside so we can take pictures and give you your prizes. Thank you. Thank you, Intel. All right, next up, AWS. Come on stage, AWS. We've got Rohit Tauri, Kevin Liu, and Brandon Middleton. And that's what they're going to tell you. Go ahead, Rohit. Howdy there. Can you all hear me? Yes? Awesome. Well, hey, thank you so much, Skydeck team, for having us, and CalHacks. This has been an extremely impressively run operation, and we're really excited to be partners and sponsors of this hackathon. Today we have three different prizes. Actually, let me introduce myself first. We have Brandon, Kevin, and Rohit. We are members of the generative AI organization at AWS. We work with top generative AI startups like Anthropic, Stability, Perplexity, and others in the development of their large language models, as well as our overall kind of inorganic and organic growth strategy, including investments as well. Today we have three different prizes, and four teams that we have chosen to give the prizes out to. Our first place prize is for $10,000 in AWS credits, and we have two other prizes, one for climate tech and then one for responsible AI, which are 5,000. I did want to say that we talked to so many impressive startups, founding teams, and hackathon teams today. I wish we could give prizes to all of them. We did want to recommend that those who we spoke with, and I think we had these conversations with you already, go ahead and apply for the AWS Generative AI Accelerator, as well as our AWS Activate program, to receive additional credits. I'll go first. I'm going to be announcing the climate tech prize. We're going to give the prize out to Disaster Aid. Is Disaster Aid in the room today? Yes. Good job, guys. Kevin? And then for responsible AI, we have a two-way tie, so we're splitting that prize into 2.5K for each team in credits, and that's GPT Ethics and DP Ancestry. They're in the hall. All right, and I'll round us out. Our grand prize, kind of the most impressive thing that we heard and saw today, is going to go to Safeguard. So Safeguard team, if you're in the building, stand up real fast. Let's give you a round of applause. I don't see them, but God bless you and keep doing what you're doing. Thanks so much, guys. Thank you. Thank you, AWS. Our next amazing partner, Reach Capital. Please come out, Tony Wan. Oh, out of order. Let me see if I can find you. There you are. Okay. Mic. Awesome. Thank you so much. Thank you to CalHacks. Thank you to Skydeck. It is such a delight and pleasure to be with all of you today. And thank you to everyone for being here from across the country, from across the world. My name is Tony, and I'm from Reach Capital. And let's just cut to the chase, because there's no drama here. We want to congratulate Frodo for winning our AI and Education Prize. Frodo, Aman, Kush, and the team, if you are here, please stand up. Please stand up. All right, you are right up front. You are in the right place. Thank you so much. You've won the one ring, as they say, or at least our $1,000 cash prize. So please, let's meet up afterwards. Reach Capital, we're an early-stage edtech VC firm. We invest in education across K-12, higher ed, and workforce, in tools that expand access to education and accountability.
And many of the companies in the portfolio were founded by students themselves. You know, like what better place to find great ideas and great talent than to go to places like this where students are living that experience. So if you're building a venture in ed tech, please reach out. Thank you so much. Thank you, Tony. Next up we have you.com. We have Rex. Come on out, Rex, and tell us about the prize. Applause, please, for our sponsors. Thank you. Thank you so much. Yeah, so we wanted to announce our winners. I'm Rex. We're from you.com. This is Oliver. As you know, you.com brings transparency and web access to our LLMs to make sure that they are as accurate as possible. So we wanted to give an award for the best use of you.com's APIs to Transparify. So congratulations. If you guys want to stand up. If you're here. There you guys are. Yeah. Thank you so much. Transparify did an incredible job by live streaming videos, fact checking them as they went, using sources from the web and you.com search APIs. It was really incredible and powerful. And Oliver will talk about our custom assistant. Yeah, so for our best custom assistant, we'd like to give that to events.ai with Oliver and Devesh. So Oliver and Devesh, can you please stand up if you're in the room? Congrats. Over there. Yeah, so we were particularly impressed by what they've built. Essentially, they handle booking, searching, and even talking with customer agents on the phone. And they used you.com in a way to actually find these events. So we were incredibly impressed by them and can't wait to see what they do in the future. Yeah, come find us after, and we will give you your awards. Thank you, Hume. Or rather, you.com, thank you. All right, I think we're going back to Hume now with Zach Greathouse. Welcome, Hume. Nice round of applause, please. Hi. So first, just a huge thanks to Skydeck and CalHacks for organizing this event and inviting us back. And to all the staff for running such a memorable event. So I'm going to be announcing our three categories for our track. We have our best empathic AI for social good, our most innovative empathic AI, and then just the best overall. As you can see, the teams are here. We've chosen Scam Scanner. Scam Scanner, are you here? Can you stand up? All right, big, absurd applause. For most innovative, we have Bloom Buddy. Where's Bloom Buddy? Can you stand up? Yeah, okay, great job, you guys. Talking to plants. And then best use of empathic AI overall, we chose Lock-In. It's a personalized learning assistant. Are you in the room? Where are you? Yeah, there we are. Okay, congratulations, you guys. Come meet us after outside. We'd love to chat, take pictures. And thank you so much. Thank you to all the participants. Yeah, we'll maybe see you next year. So take care. All right, thanks, Hume. All right, and our last special prize is Groq. There they are. Please welcome Jose Menendez. Hey, everyone. Very nice to be here. For those of you who haven't heard about Groq: groq.com, experience the fastest inference on Earth, period. All I have to say about Groq right now. But our special Groq star award today goes to ScamScanner. Where are you guys? So these guys have a product that I want my mom to use today, right? Monitor your call for potential scams. Who doesn't want that for your mom, your uncles, and the whole thing? Now they get 1,000 credits on GroqCloud, which is many, many millions of tokens. There's two special mentions. I have to read so I don't screw up.
3brown1blue, where are you guys? Another awesome solution. These guys are generating on the fly incredible videos for math. Something that I would use right now as well. And Transparify, are you guys around here? You, you've been mentioned as well. Transparify, very cool. Who doesn't want to hear a podcast with instant fact checking, right? Am I right? Now, my special surprise for the day, I want to make a very special mention of Nathan Buck. Are you around? Nathan! All right. Nathan didn't use Groq, so I'm going to give a special Technical Excellence Award to Nathan for a model he trained and ran on his CPU for doing very interesting DOM operations corrections on the fly for front-end. Not only that, Nathan is invited officially to present his work at the Groq HQ as soon as he can. That's it, guys. I'm very impressed with all the work we saw. Thank you very much, congratulations. Thank you, Groq. All right. Our esteemed judges are back with their determination. Please come back, judges. Come back so we can all enjoy the grand prize. Are you guys ready? Do you have a guess? Is there a vote? Do we have a voting tally? Taking bets, everybody. I want you to guess your top two choices for grand prize, and then I'm going to ask who got it right. Okay, so as our wonderful judges take their seats. All right, we got some shout outs going here. Any other shout outs? Okay. All right, this audience is into it. So as a reminder, the grand prize is a $25,000 investment from the Berkeley Skydeck Fund. Also, a golden ticket to our Pad-13 program at Skydeck. And a special prize. We are happy to announce that OpenAI is providing $2,500 in credits for this winner. So I think we're ready for the drumroll. Take your guesses. Only the judges know. I don't know. We're all about to find out. It's Dispatch AI! Dispatch, where are you? Come on up. There's stairs right there. Come on. Come to the front stage. There you go. Thank you, judges. While we invite Dispatch up, I want to thank all of you for coming. I want to invite Spike from Dispatch. Okay, here's the team. There we go. Dispatch AI, grand prize winners. Well done. Well done. I'd like to invite the Skydeck staff to come out and the Berkeley Hackathon staff to come out. Come on out. They've been working all weekend. I think some of them did not sleep at all. Please give everyone who joined to make this happen a huge round of applause. Thank you, everybody. Thanks for joining us. We will see you next year.
Andrej Karpathy's Keynote & Winner Pitches at UC Berkeley AI Hackathon 2024 Awards Ceremony
5547
Berkeley SkyDeck
20240703
At the 2024 UC Berkeley AI Hackathon's Awards Ceremony, the atmosphere was electric as Andrej Karpathy, founding member of OpenAI, delivered an inspiring keynote. Out of 371 projects, the top 8 teams took the stage to pitch their groundbreaking AI solutions. After intense deliberation by our esteemed judges, the big reveal came: up to $100K in prizes were awarded, celebrating innovation and creativity in AI for Good. Missed the live ceremony? Relive the excitement and watch the future of AI unfold! For more SkyDeck news, connect with us on ► LinkedIn: https://www.linkedin.com/company/skydeck-berkeley/ ► Instagram: https://www.instagram.com/berkeley_skydeck/ ► Twitter: https://twitter.com/SkyDeck_Cal Chapters: 0:00 Welcome 0:19 Caroline Winnett 4:05 Andrej Karpathy Keynote Speech 22:20 Pitch Overview 23:29 Judge Introductions 24:43 Revision 31:23 Agent.OS 38:54 Skyline 44:32 Spark 51:35 HearMeOut 57:05 Dispatch.Ai 1:02:04 ASL Bridgify 1:08:57 Greenwise 1:13:35 Special Prize 1 1:17:24 Special Prize 2 1:19:30 Special Prize 3 1:20:45 Special Prize 4 1:23:15 Special Prize 5 1:24:27 Special Prize 6 1:26:00 Special Prize 7 1:27:24 Special Prize 8 1:30:10 Grand Prize Winner #AIHackathon #UCBerkeleyAIHackathon #BerkeleyAIHackathon #Innovation #TechForGood #BerkeleySkyDeck #AI #LLM #AIforGood #HackingForGood #UCBerkeley #Startups #awardsceremony #Hackathon #TechInnovation #AndrejKarpathy
2024-07-09T15:54:20.911764
https://youtu.be/y-FfDQJgo_8?si=YNoYnedHT7XUR1hO
Hi, good morning everyone. Thank you for joining us today in the sixth webinar in the A-Cell webinar series. My name is Josephine Lam Bong and I'm the Senior Manager of Science and Industry Affairs at ARM. Today's webinar is on the topic of control strategy and will be delivered by Nolan Poulsen, who is the lead contributor of Chapter 10 of A-Cell. So for those who are not familiar, ARM is the leading international advocacy organization for the benefits of engineered cell therapies and genetic medicines for patients, healthcare systems and society. As a community, ARM builds the future of medicine by convening the sector, facilitating influential exchanges on policies and practices and advancing the narrative with data and analytics. We actively engage key stakeholders to enable the development of advanced therapies and to modernize healthcare systems so that patients benefit from durable and potentially curative treatments. As the global voice of the sector, we represent more than 475 members globally, including emerging and established biotechnology companies, academic and medical research institutions, as well as patient organizations. As I'm sure everyone here knows, manufacturing for cell and gene therapy is a complex process and it's often a rate-limiting step in the commercialization of regenerative medicines. The industry overall is in the process of rapidly scaling up, and there are more phase three studies and more commercial product approvals that are anticipated. However, products are being held up at the BLA stage due to CMC issues. We see that advances are being made in the manufacturing technologies, and the experience of the workforce is overall improving. However, standards and best practices in the field are not advancing as quickly. So this is the pre-competitive area where ARM focuses our efforts to further advance the process understanding in this industry to facilitate product development. A-Cell is a project that ARM undertook several years ago after feedback from our members about the CMC issues that the field as a whole is experiencing. It is a best practices document which incorporates collaborative case studies presenting the implementation of quality by design in cell-based therapies. And to make A-Cell an effective resource and reflective of the ongoing innovation, we chose to focus on immune cell therapy as the case study, and in particular, we further focused on autologous CAR-T cell therapy, given the recent approvals and the number of products that are available to the patients currently. The A-Cell effort follows the success of A-Gene, the sister project of A-Cell, which focuses on AAV vectors, and A-Mab, which was instrumental in allowing advances to occur in the monoclonal antibody industry. We're very fortunate at ARM to have a large number of subject matter experts from our member companies. There are a lot of contributors for this project from more than 30 member companies, and this project is also supported by NIIMBL, who generously provided funding for A-Cell. The document is intended to benefit the entire field, so it is available as open access on our website at the link provided here. This is what the contents of the document look like. As you can see, it covers quite a big scope from regulatory considerations, comparability standards, all the way to facility design. Today's talk will cover the elements of control strategy that are laid out in Chapter 10.
The webinar today is the sixth out of the nine-part webinar series for the A-Cell educational rollout. The schedule for the webinars is provided in this link down here. Our hope is that A-Cell, both the document and the webinar series, can serve as an educational tool to the community, and we hope that the document can be used by product developers, educators at universities, and we also have the aspirational goal that this will be used to help educate the regulators. And before we move on, just one housekeeping item. You can use the Q&A function to ask the questions and we'll try to get to as many of them as we can. If we didn't get to your questions, we will compile them and post a response on our website. And with that, I'm sure you're not here to hear me talk, so I would like to introduce our expert speaker for today's seminar. Dr. Nolan Poulsen is the Vice President of Strategic Product Quality at GlaxoSmithKline, where he and his team oversee global end-to-end product quality of cell therapy, biologics, and small molecule programs during clinical development and commercialization. He is an experienced professional in providing program CMC leadership, product quality, and technical oversight during CMC development in support of commercialization efforts for large molecule and cell therapy programs to global markets. Nolan was instrumental in the successful commercialization of two cell therapy programs, Breyanzi and Abecma, that were approved by the FDA in 2021. He previously worked in the industry with increasing levels of responsibility within R&D, quality control, formulations, global product quality, and CMC leadership at Amgen, Janssen, Juno Therapeutics, Celgene, and BMS. He has a keen understanding of process development and considers quality as a strategic partner with the technical teams to drive holistic control strategies to ensure product quality and patient safety. So Nolan, thank you again for taking the time out of your busy day to be here today and I will turn it over to you to start your presentation. Thanks Josephine, I appreciate that. So very kind introduction. So let me see if I can get my screen to share. We'll get this thing going. This will be good. So let me know if you guys can see this. OK. OK. Now, so thanks for the introduction, Josephine. This is a pleasure to be here. It's been great to be involved in the A-Cell program. Certainly, you know, when I first started my career, looking back at the A-Mab case studies in the biopharm space, it was amazing to kind of see what was done there and how much we were able to leverage that across the industry. And so I'm hoping that the contributions that were made, not just from me, but I'd say from the body of individuals and support team that I worked with at Juno Therapeutics, Celgene and BMS, and certainly different aspects here at GSK, contributed to all of this. So, you know, a big shout out and a thank you to all those individuals that were there, part of the team to support this. So wonderful, wonderful time. I wanted to start off, and this is just, for me, it gets very personal, right? Why does this matter to me? And this is really what's been foundational to me. I've been able to meet Emily Whitehead. This is a picture of Emily Whitehead on the left. She was the first pediatric patient, diagnosed with ALL. And she was first dosed with Kymriah.
And so the first cell therapy pediatric patient, and obviously, it's been amazing to me to see the progress and the life-saving capabilities of these cell therapy programs. Emily's the same age. I have a daughter named Hadley that's the same age. And I can't imagine going to the doctor when she was age five, wondering if she would ever make it home. And that's what the Whitehead family had to go through. And so I just, I can't help but feel inspired by that experience and what was done there by the doctors at UPenn. On the flip side, and this is one on the other side, this is a picture of my father on the right, who at age 78 was diagnosed with diffuse large B-cell lymphoma. And so the wonderful thing from my perspective is that he's since gone through chemotherapy. He's in remission right now, which is good. Based on his age and based on the cancer type, it's comforting to know that all the hard work, blood, sweat, and tears that went into supporting the commercialization of Breyanzi at BMS has the ability in the future to potentially dose and support and save the life of my parents. So to me, it really comes home to say this really matters, right? And the power and the concepts behind cell therapy is huge and it makes a big difference. And I just can't imagine a world without this life-saving medicine and the potential benefits that it has, not only for the two individuals here, but for countless numbers of individuals across the world. So certainly there's many challenges and certainly many opportunities in the cell therapy space. One is, you know, you're dealing with live cells. And so there are many, many different attributes to be characterized. If you look at the acceleration and advancement in the field of cell therapy, it's rapidly evolving. There's new technology being designed and implemented all the time, new processes, new analytics. And so we're always continuously learning about what makes the most sense, what is most important for patient safety, product quality, and certainly what is the most important from a regulatory agency perspective. What are those weak signals or hot buttons that we need to worry about? The one unique thing that's certainly a challenge and an opportunity is when you look at autologous therapy, and this is not necessarily quite to the extent in the allogeneic, but certainly the principles apply, you have single patient to single clinical outcome. So you can really start looking at patient-centric specifications, which have been a buzzword across the industry. This is truly the epitome in the autologous therapy space where you can directly correlate one clinical outcome based on a CQA profile from the actual drug product, which is huge, very powerful. And as you accumulate all of that data, looking at that clinical correlation of those CQA profiles, you can have a direct correlation and a direct understanding of does this particular CQA matter to the patient outcome and the benefit, and have those direct conversations with regulatory bodies to understand, is this a fluke? Is this statistically significant? And does this matter to that patient population? So it's certainly very unique, and we'll get into that a little bit later too. And then certainly one of the challenges obviously is the source material, and I'll talk through this. There's a huge variability component based on the input material as well as how it's being managed, how it's being transported, stored, all the rest of that.
So most of you, if you're on this call, you've probably seen the general CAR T cell manufacturing process, you know, starting with apheresis, going through transduction and reinfusion into the patients. And when you start looking at it from the frame of the controls, you know, what other controls do you have? Obviously, you have drug product type, you know, different specifications, whether it be for the input apheresis material, your intermediate specs, your viral vector specs, your drug product specification. You have all sorts of in-process controls and critical process parameters that are trying to manage the manufacturing process. Certainly it's embedded across the board to make sure you have a very thorough, extensively characterized process to understand what's going on. And then there's obviously the raw material and GMP controls, and then certainly there's periodic controls as it relates to validation, comparability, and your product reviews and stability. So as we go through the control elements discussions today, we'll probably touch on quite a few of these as we go through this. If you look at the evolution of a control strategy, really your holistic integrated control strategy is created from smaller-scale control strategies, basically process parameters, material attributes, procedure controls, and analytical testing. And so all of that is underlaid by your process understanding and your product understanding as it relates back to your QTPP and your potential CQAs, and through the course of characterization, you'll truly understand what's most important. How do you refine and adjust your process studies, your analytical capabilities? What's the most important from a material attribute perspective? And then certainly there's an overlying risk across the board. And I think that the most critical piece, in my opinion, is making sure that there's that feedback loop from the clinical output to make sure that you're constantly reviewing and understanding adjustments and opportunities for improvement so that we can, by the time we commercialize any product, have the highest likelihood of success and the best product that you can have being presented. Certainly, there's a level of risk. And just like what Josephine had mentioned, there's many other chapters that go into many different topics. So I'll just preface this: there's lots of strategies that will be discussed today that have many different and more extensive documentation in the earlier chapters compared to the Chapter 10 control strategy discussion. But if you look at the control strategy, obviously the key is to make sure that we have a risk-based approach for the control strategy definition. You're really trying to understand the science. You're really trying to understand what's most important. You're trying to minimize variability and really trying to make sure that you have a clear understanding of your set of controls, what do you need to have to make sure you have product and process understanding to ensure product performance and product quality, and making sure that you have a consistent process that continually delivers efficacious and safe product to the patients every single time. And so every single component that we're looking at from a product quality control strategy has to consider risk.
And there are many different aspects of risk and how you're documenting that. And we'll go through some of those here as well. So obviously, the real question is, during a manufacturing process, what are the specific controls? What are you trying to look for? And how do you establish a suitable control strategy? And really understanding the correlation between your process and your CQA profiles is essential to make sure that you have a well-characterized process and that you can deliver a safe and efficacious product every single time. Obviously, those tools, there's flow diagrams, SIPOC, IPO, which is the example to the right, your process inputs, your process and your outputs that really frame up what does your process look like? What are the potential inputs? What are the potential control mechanisms? And what are the outputs that you're expecting? And how would you measure those? And so this is a set of controls that can be put in place just to really understand the process from the holistic fundamental understanding to really define what's likely important and what's most important. And obviously, the criticality of process steps is determined by the impact to your CQA profiles. And when you start looking at this, well, how do you do that, right? And there's a whole host of activities and characterization studies that can be done. And so if you look at the figure on the right, you start looking at all the different inputs and testing and analytical strategies as it relates to phenotype and function transduction efficiency, as well as just the entire production. And so there's a holistic way of looking at all of these different parameters, and you can do them one side by side, single attribute at a time, which is a very slow and methodical process, very thorough, but it's not very cost effective. Or you can use a design of experiment type approach where you're looking at multiple inputs, trying to deconvolute the assessment of practical impact of a particular parameter as it relates to your CQA profile. And so certainly, you know, if you were to go through and say, what's the best that you could probably do, making sure you have a very well robust understanding of design of experiment methodology, DOE approaches, and leveraging those to support your process characterization. Obviously, if you have FME analysis, and then really you're trying to make sure that what you do is you're making sure that you have normal operating ranges as well as proven acceptable ranges to understand is your process as it's running in a state of control, obviously things happen where you will have issues and there will be challenges and something may happen and you may have a deviation. You have to make sure, is there any practical impact around this proven acceptable range to your CQA profiles? And all of that leads you down to understand what's the most important from a CPP perspective or in process controls. The one thing that I wanted to mention on the FMEA risk assessment, obviously this facilitates an increased level of characterization based on your potential risks. But I would say making sure that you have a very thorough understanding of your potential failure modes and is very comprehensive is going to set you up for success so that you can understand, okay, have we captured everything? Have we thought of everything? 
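To make the DOE-based characterization described above concrete, here is a minimal sketch, in Python, of how a two-level full-factorial study might be set up and analyzed. The factor names (MOI, IL-2 concentration, culture duration), their levels, and the simulated transduction-efficiency response are hypothetical illustrations, not values from the A-Cell case study; in practice the responses would be measured CQA results.

# Minimal sketch: two-level full-factorial DOE for process characterization.
# Factor names, levels, and the response below are hypothetical illustrations.
import itertools
import numpy as np

factors = {
    "MOI":           (1.0, 5.0),     # multiplicity of infection (low, high)
    "IL2_IU_per_mL": (50.0, 300.0),  # cytokine concentration
    "culture_days":  (7.0, 10.0),    # expansion duration
}

# Coded design matrix (-1/+1) for a 2^3 full factorial.
coded = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# In practice these would be measured CQA results (e.g., % transduction);
# here a response is simulated with known main effects plus noise.
rng = np.random.default_rng(0)
true_effects = np.array([4.0, 1.5, -2.0])
y = 55 + coded @ true_effects + rng.normal(0, 1.0, len(coded))

# Fit intercept plus main effects by least squares.
X = np.column_stack([np.ones(len(coded)), coded])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("mean response:", round(beta[0], 2))
for name, b in zip(factors, beta[1:]):
    # Moving a factor from its low to its high level changes the response by 2 * coefficient.
    print(f"{name}: low-to-high effect estimate = {2 * b:.2f}")

The effect estimates from a study like this are what would feed into setting normal operating ranges and proven acceptable ranges for the parameters that turn out to have a practical impact on the CQA profile.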
Doing an FMEA that's just there to check the box and doesn't necessarily get into the level of detail that you'd expect may drive you into a situation where you're starting to identify issues, challenges, or gaps, either during the review process with the regulatory agencies or during late stage validation activities, that may set you back and it may delay the commercialization efforts. So making sure that you have a thorough FMEA risk assessment tool to truly understand what's going on at each particular unit operation. And the risk assessment is described in a separate chapter as well. So when you start looking at process validation and process comparability, obviously there's a lot of different CAR T considerations. Obviously you've got the FDA guidance on process validation, where you're looking at process design, truly understanding and characterizing the process, doing the qualification or the validation activities for stage two, and then making sure that in stage three, whether it be through annual product reviews or whether it be through periodic CPV type studies, you're continually operating in a state of control. And that would be critical there to make sure that your process is in a state of control. I would also emphasize making sure that your analytical methods are in a state of control, so you don't start to see drift or shifts, or when you start changing critical reagents or reference standard materials that you start to see drifts. You've got to make sure all of that's built into this CPV process so that your process and your analytics are operating in a state of control. Obviously, comparability is critical. I think everybody probably could describe many different comparability strategies, but making sure that pre and post change are comparable in terms of safety, purity, and efficacy is critical. And I think when you start looking at, well, what does this mean from a CAR T consideration? Really it's a matter of making sure that you have the similarities established using a paired run approach. And I'll get into this a little bit, but essentially what that means is when you start looking at healthy donor materials, and this is an example, when you look at donor to donor variability, it's quite extensive. And this is for healthy donors in the autologous cell therapy space. And this was a study done at BMS previously. And so if you start looking at the donor to donor activities, you start to see many different, you know, significant shifts. And when you start to look at the variability from donor to donor versus, you know, in this particular case, it was, you know, condition one, two, three, four, and five. But you could imagine if this was, you know, pre-change, post-change, what does that look like? And are you truly assessing comparability? And so a paired run approach basically says, or approaches, a comparability strategy where you have a healthy donor whose apheresis material is split into process one and process two. And so then you're looking at pre and post change, whether it be on vector, whether it be in process improvements, whether it be in analytics. So you can really try to deconvolute what the process differences might be compared to patient to patient variability.
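As a minimal sketch of the paired-run comparability idea just described, the analysis below assumes each healthy donor's apheresis is split between the pre-change and post-change process, so the comparison is made on within-donor differences and donor-to-donor variability largely cancels out. The CQA values are made up for illustration.

# Paired-run comparability sketch: each donor's apheresis split between
# process 1 (pre-change) and process 2 (post-change). Values are illustrative.
import numpy as np
from scipy import stats

process_1 = np.array([62.1, 48.5, 71.0, 55.3, 66.8, 59.4])  # e.g., % viable CAR+ T cells
process_2 = np.array([63.0, 50.2, 70.1, 57.9, 68.3, 60.0])

diff = process_2 - process_1
n = len(diff)
mean_diff = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(n)

# 95% confidence interval on the mean within-donor difference.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean_diff - t_crit * sem, mean_diff + t_crit * sem)

t_stat, p_value = stats.ttest_rel(process_2, process_1)
print(f"mean within-donor difference: {mean_diff:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

In practice the confidence interval on the within-donor difference would typically be judged against a pre-specified comparability margin rather than relying only on a non-significant p-value, and the number of paired donors would be sized to give adequate statistical power.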
One thing that may be a unique challenge is when you start looking at this, obviously in the biofarm space or in the small molecule space, you're looking at PPQ and doing this on three batches and calling it good. Because you have a potential contributing factor for patient variability, you may have to do much more than just n of three to make sure that you have statistically significant ability to evaluate that process one and process two change, and basically doing a mini comparability assessment in multiple donors, and then trying to deconvolute that variability from the donor versus the process. Obviously, the material attribute controls are critical. So obviously there's a whole chapter around material and starting materials. Obviously looking at, you know, what is the potential impact on chemical and biological impurities, adventitious agents, tumorigenic impurities, as well as any other materials that may cause an immunogenic reaction. And then similar to what I had mentioned before, patient material is critical. And I have another slide on this as well. Obviously, making sure that how you're collecting the apheresis material, whether it be frozen, whether it be shipped within a certain number of hours, making sure you have validated transportation activities or processes to collect, manage, and maintain chain of custody of that patient material is critical. Because if those aren't correct, you start to induce additional sources of variability that may hinder you to truly evaluate whether or not you have a controlled process or whether or not you're simply detecting variability embedded in your upfront collection and sample handling. Obviously Obviously the health of the cells and the patients. So you start looking at prior treatment options from the patient. What are their disease state? What kind of prior illnesses? All of those impact the patient material and increase the ability or potential for source variability. Obviously, your vector controls, all of those are critical to making sure that you have a well-validated, suitable control strategy there as well. And then, ancillary materials. So I couldn't say enough about single-use systems, vendor qualifications, supplier chain vulnerability. And I kind of look at the supply chain vulnerability just with what we've gone through in the last few years with COVID. And you start looking at that. And even at GSK, we've had instances where there's been challenges because single-use systems are either sole-sourced or sourced from various regional jurisdictions throughout the world, some of which are more challenging to get than others. Some may have various to get than others. Some may have various quality standards than others. And so it really creates a potential vulnerability for the industry, making sure that you have a robust supply chain. You've got systems that will be able to be leveraged and utilized in the time when the patient needs them. I think car team manufacturing and distribution and disposition and support and reinfusion back into the patient is critical. And so any delay, whether it be one day, one week, or one month in any of these materials has the potential to impact that patient in the end. So making sure that we've got that robust supply chain and the attributes are critical for CAR T systems. And so when you start looking at patient heterogeneity, it's the primary source of variability for cell therapy programs. 
And so obviously the autologous cell therapy is certainly impacting the most in this particular realm because you have direct patient to patient heterogeneity, whereas the allogeneic type program, you've got multiple pools of healthy donors. And so you can reduce that potential variability. But certainly when you start looking at this, and this is an example of during this study where you had multiple donors and multiple conditions that were being studied, you start to look at the donor variability and you look at that and the donor variability was actually 82% of the total variance observed in this particular study. And then you start looking at process variability is 12.2%, analytical variability is 5.5%. And so it really demonstrates the fact that it is a challenging field and you have to be ready to make sure that all of your statistics and evaluation and analysis takes into account you may have a large portion of variability due to the patients and therefore you've got to make sure that your process and your analytics are tight and truly understand this in a statistical manner. Now, obviously, there's a lot of controls as you look at this from a procedural and a facility perspective. Looking at CGMP, and I look at CGMP, the most important is C, which is current, obviously. at CGMP, the most important is C, which is current, obviously. And if you look at the general GMP principles, obviously there's many different aspects that make a well-rounded, robust control strategy embedded in the manufacturing site. So obviously aseptic processing as well as GMP procedural controls, not only in your process, but also on your equipment, your environmental monitoring and facility design is critical as well. And on the triangle on the right obviously talks about your design, your cleaning, disinfectant studies, as well as your environmental monitoring program are critical to make sure that you have that foundational element for the controls internally. Obviously, you know, the most critical or one of the most critical is personnel training and using aseptic techniques. There may be some processes. I think most processes within the CAR-T space are designed to be closed systems. There are instances where they're not. And so making sure that you've got very good aseptic technique to minimize potential sources of variability in training. Obviously, contamination controls, whether it be through raw materials or any other environmental controls, and then making sure that you've got GMP batch records and execution. So you've got all of these embedded, a well-rounded structure that creates a foundational element so that your procedure, your facility, your documentation, your GMP-esque type approach is critical to making sure that you have a well-rounded, very robust system. That way, when you're undergoing scrutiny by the agencies during the review or during an inspection type process, you can demonstrate, look, we've got all sorts of controls, not only what's on paper, not only what's on, you know, based on the characterized and the technical understanding, but from a GMP element, we also have additional procedure controls that protect patient safety and product quality. Now, obviously, there's analytical testing controls that are critical throughout the development. Obviously, you're going to have phase-appropriate analytical testing strategies. They will evolve over time. 
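As a rough illustration of the donor / process / analytical split described earlier in this section (the roughly 82%, 12%, and 6% figures), here is a simple moment-based decomposition. It assumes an additive model with no donor-by-condition interaction, and the data, group sizes, and variance magnitudes are simulated placeholders rather than study results.

# Rough variance-components sketch: donor vs. process/condition vs. analytical noise.
# Assumes an additive model (donor + condition + residual); data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
donors, conditions, reps = 8, 5, 2

donor_eff = rng.normal(0, 9.0, donors)     # large donor-to-donor variability
cond_eff = rng.normal(0, 3.5, conditions)  # smaller process/condition effect
records = []
for d in range(donors):
    for c in range(conditions):
        for r in range(reps):
            y = 60 + donor_eff[d] + cond_eff[c] + rng.normal(0, 2.3)  # analytical noise
            records.append({"donor": d, "condition": c, "y": y})
df = pd.DataFrame(records)

# Analytical variance: average replicate variance within each (donor, condition) cell.
var_analytical = df.groupby(["donor", "condition"])["y"].var(ddof=1).mean()
# Donor variance: spread of donor means, corrected for averaged analytical noise.
var_donor = df.groupby("donor")["y"].mean().var(ddof=1) - var_analytical / (conditions * reps)
# Process variance: spread of condition means, with the analogous correction.
var_process = df.groupby("condition")["y"].mean().var(ddof=1) - var_analytical / (donors * reps)

total = var_donor + var_process + var_analytical
for name, v in [("donor", var_donor), ("process", var_process), ("analytical", var_analytical)]:
    print(f"{name}: {100 * v / total:.1f}% of total variance")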
Your selection of assays, your robustness around the qualification and validation of those assays will continue to improve throughout development. So obviously, what you're using in a preclinical non-GLP tox study will be very dramatically different versus what you're going to file in your commercial control strategy for your commercial filing. So obviously, all of these need to take into account, you know, what kind of method robustness do you have? Is it suitably qualified? Is it suitably validated? What kind of repetition do you have? What kind of acceptance criteria do you have? All of those are critical in making sure that we have, you know, as we go through development, which may take, you know, one to two years, depending on how this goes in the CAR T space, obviously adapting and making improvements to your methods as well as processes is critical. And then adapting to those, making sure that you have, you know, routine release testing as well as characterization embedded throughout. And so the other piece that I wanted to mention too is if you look at process analytical technologies, this is a powerful tool where you can leverage inline or at-line analytical technologies to speed up your analytical results, decrease your transcription time, and decrease your vein-to-vein time, because then you're getting at-line testing that you don't have to send to a lab and test and then get results back. It's immediate results that'll help support a robust and efficient process. So anywhere where you can leverage that certainly is critical. Obviously, this outlines some of the analytical testing that is used throughout in-process production, and this is embedded as a part of the analytical chapter in the A-Cell program. So the one thing that I wanted to mention here too is when you start looking at analytical methods and development, really the need for innovation is critical. And obviously there's complex cellular phenotypes. You're looking at all sorts of cell health, phenotypic compositions, antigen function. So you really want to make sure that you have a truly powerful technique that can leverage those new technologies. Certainly there's rapid and complex methodologies that are being developed all the time. You look at the Accellix platform, which is a cartridge-based system, which is an amazing system that can be leveraged as long as it's, you know, adapted and validated to support the actual intended use. So, but I think, you know, when you start looking at all of the analytical method improvements, the whole desire here is to minimize the vein-to-vein time. And so by the time you look at some patients, and this is what was interesting during development of Breyanzi and Abecma, you get to the point where you start manufacturing a drug product. And a week or two into the manufacturing process, suddenly you get a notification that says, you know what, suspend that process. The patient didn't make it. And you start to think about that and you start to realize and understand every day counts, and every little bit of time to minimize that vein-to-vein time gives extra hope for the patients over time. So anything that we can do to leverage and increase the use of new technologies, drive automation, and increase use of digitalization to reduce that time is all the better. And so these are just some examples of flow cytometry assays, qPCR, endotoxin methods, or sterility, where you say, okay, what's the current versus what's potentially new technologies.
And it's amazing to see the power of some of the technologies. And so, you know, you look at the flow cytometry from a conventional platform versus the Accellix, it's very dramatically different. Obviously, same thing for immunoassays, qPCR, sterility, and endotoxins. So anything that can be done should be done and leveraged, you know, to support that. So now one thing that I wanted to talk through is really this correlative analysis. And there's been obviously many different presentations on this across the industry, many different papers published on this, but I wanted to make sure that this was covered because it's critical. And I think this is probably one of the most important aspects of building a meaningful control strategy in an autologous space: making sure that you can drive and align the quality attributes to the patient-centered specifications based on clinical output. And so obviously you can go through and look at all of your purity, identity, strength, potency, safety, cell health type studies and analytical tools and make sure that the results of those, those product attributes, can be tied to patient factors as well as correlated to clinical outcomes. And then theoretically, you could build those into translational markers as well. And so obviously, you know, in a routine clinical study, you're looking at safety, efficacy, and pharmacokinetics. And so all of those come together, and it really is a powerful tool during the review process to be able to go through and say, look, we have a wide variety of CQA profiles. What we have identified is that even, you know, even the ones that were extremely low or even beyond the clinical specifications, actually out of spec, actually provided some clinical benefit to that particular patient. And there was no safety impact. And so it's a powerful tool where you can go through and leverage that. I think the real question, and this kind of gets to the same kind of thing, is what's the most important as you look at what's the appropriate specification, right? So if you look at your clinical studies and you look at your example CQA profile, you may say, okay, is your blue line, should that be your specification based on a conventional statistical analysis that might be expected in the biopharm space? And then the real question is, okay, well, given the fact that you have all these out-of-spec results that are between the blue and the red line or below the red line, right, should your red line be a more appropriate specification, because you've demonstrated that you have appropriate clinical safety and efficacy for those particular patients and therefore you can leverage a slightly different tolerance interval approach. And that's where, you know, the real question is, are these outliers? Are they true, you know, random patient variability? Are they true outliers? Is it based on the disease state? All of those things need to come into consideration as you start looking at all of these different components of your control strategy. And this is an example where you start looking at the Kaplan-Meier curves, and this is built into quartiles. So the lower quartile of a CQA type distribution, lower middle, upper middle, and upper.
And this is where you can start to look from the clinical perspective to say, okay, well, based on all of your upper and lower quartiles, do you see any practically significant difference in your clinical output? And so obviously the scenario one essentially demonstrates that your lower middle, your upper middle, and your upper quartile distributions of your CQA profiles essentially give you the same clinical benefit. And so that might be an argument where you go in and say, look, there is no practical impact or adverse impact to patient safety or product quality and therefore we believe that the entire distribution of your cqa profile could be leveraged to support your commercial control strategy and then certainly on the right you've got scenario two where you start to say okay well what are those critical important positions and you start to see where there's almost an edge effect where that lower quartile, the line in red, starts to demonstrate some adverse impact or a potential different impact compared to your lower middle, upper middle, and your upper. It may be obviously that if you start looking at this, your lower quartile doesn't mean that it's bad. It just simply indicates that your clinical benefit is slightly different. It doesn't necessarily mean that it's adversely impacting compared to the patient. So anyway, so these are just two different ways of looking at how you can correlate your data, how you can break it up, how you can look at your clinical output and try to understand does this allow you to justify a wider specification versus a tighter specification as you go through and move this, move your control strategy and how you have those discussions with the agencies. So obviously when you look at kind of the developing of the integrated control strategy, this is builds, I'm gonna just leave it like this. Your integrated control strategy has to build upon a solid foundation of what your platform CQA profiles are gonna look like based on a QTPP. It has to start out with some of your process characterization and it builds. So as your manufacturing experience builds, as your preclinical and clinical experience builds, and your first in human pivotal and your commercial, your characterization studies will build, your comparability studies will build on your product and process understanding, which leads you up to validation and final commercialization. And so obviously it's an iterative process. So this is not something that you can simply say, we will have a commercial control strategy at a clinical stage. You've got to make sure that from a risk-based approach, you're taking phase appropriate control strategy decisions. You have obviously CQAs that are non-negotiable. You also have prospective CQAs that you just don't know early on in development what's the most important and really making sure that you have a risk-based approach. Obviously all of this information continues to build as you execute your product characterization studies. You have iterative comparability and then obviously you have this ability to look at correlative analysis. So you can have a preliminary correlative analysis based on your first in human or your phase two type studies. And then obviously you can have a final corallive analysis based on your final pivotal study as well. And what I mentioned here, and I'll talk through these in a minute. So it talks about the initial ICEM and PQRI. ICEM is the integrated control elements matrix. 
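Going back to the quartile-based Kaplan-Meier comparison described above, here is a minimal sketch of how it might be run, assuming the lifelines package is available; the column names, outcome data, and CQA are hypothetical.

# Kaplan-Meier curves by CQA quartile with a log-rank test across quartiles.
# Data and column names are hypothetical; uses the lifelines package.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "cqa_value": rng.normal(70, 10, n),                  # e.g., % viable CAR+ cells
    "months_to_event": rng.exponential(18, n).round(1),  # time to progression
    "event_observed": rng.integers(0, 2, n),             # 1 = progression, 0 = censored
})
df["quartile"] = pd.qcut(df["cqa_value"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

kmf = KaplanMeierFitter()
for q, grp in df.groupby("quartile", observed=True):
    kmf.fit(grp["months_to_event"], event_observed=grp["event_observed"], label=str(q))
    print(f"{q}: median time to event = {kmf.median_survival_time_}")

result = multivariate_logrank_test(df["months_to_event"], df["quartile"], df["event_observed"])
print("log-rank p-value across quartiles:", round(result.p_value, 3))

Scenario one in the talk corresponds to overlapping curves and a non-significant log-rank comparison, while scenario two corresponds to an edge effect where the lowest quartile separates from the others.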
And I'll get into that in a minute, as well as the product quality risk assessment. And so these are just additional tools that can help you support and facilitate product development and design as well. So holistically, and this is, it's like a ray of sunshine right here, where you're looking at the total integrated control strategy has to be presented in such a way. And we've done this in the regulatory filings to present it in such a way that it's just not about specifications and it's just not about CPPs or KPPs. It's about everything altogether. And so that's where you have to make sure that as you look at this, the holistic approach and how to buy down risk to the patient, and you're presenting this to the regulatory agencies, that has to have foundational elements of your GMP processing, your raw materials controls, your procedure controls, you know, what controls do you have at the input material attributes for patient variability, what have you done to support characterization. And then obviously the routine controls for manufacturing, your process controls, set points, your IPCs, CPPs, KPPs, as well as your specifications for all of your different components, whether it be apheresis, vector, drug substance intermediate, or final drug product. And then obviously there's different elements of your periodic controls as it relates to stability, your process validation, your continuous process validation, CPV, and comparability. And all of that coming together really gives the viewer of your process a true understanding of what are we doing to buy down and mitigate the potential risk to patient safety and product quality and improve product quality so and that's why i wanted to get it obviously the the tools to support risk-based control strategy the cqa assessment is critical there's a separate chapter on that. But I really wanted to get into the integrated control elements matrix and just describe it a little bit, as well as the product quality risk assessment. Obviously, the purpose is to capture the CQA profiles and then looking at the overall risk to the CQA profiles to patient over time. And so a lot of this is based on the AMAP case study, and we've used this when I was at Amgen historically, and certainly elements of this can be applied to your control strategy for CAR-2 therapies. So your integrated control elements matrix really looks at your CQA profiles based on your process and control elements that you particularly have. So experience within the commercial process, your process knowledge, what do you have and understand that goes into that? And then the other piece is what kind of detection mechanism do you have, right? So are you doing raw material testing, release testing, IPC testing, validation? Is it periodic testing for PV or extended characterization or stability? And how do all of those come together? And so when you start looking at this holistically, this is where you can go into this product quality risk assessment. It looks at the residual risk to patient. And essentially, what you're looking at is you're defining a severity score, so the potential of impact of a CQA on the particular patient, the occurrence based on the capability of your process, what it's been characterized to, your input, your other controls, gives you this preliminary hazard assessment level. 
And then depending on your detectability, so depending on whether you choose to establish routine testing or periodic testing or no testing, right, that will give you a different profile in terms of your detectability. And the overall purpose is to look at your final residual risk. And so obviously for your final residual risk, you want to make sure that all of that is essentially a low risk to patients. And obviously, if you look at, for example, the example of attribute four, you have higher severity, higher occurrence. Therefore, you have a higher preliminary hazard level, but your detectability, meaning you're going to want to make sure that you're detecting and having a robust analytical method that's quantitative, that's capable, and that's able to understand and detect whether or not you have an issue with that CQA limit or not. You're going to want to have that routinely embedded there. And so that's how you buy down that risk. And so this whole PQRA tool can be used to develop and minimize risk to patients over time. You can leverage this as a tool early on in development to look at what's your initial risk to patient based on your current clinical controls or your current process understanding. And then obviously, as your process understanding or your process capability improves or your analytics improve, your occurrence score can change over time, your detectability score can change over time, and then certainly your severity, depending on the outcome of your clinical studies. If you had a prospective risk that was theoretically identified and shown that, hey, there's no history of failure, there's no history of this transduction issue or other thing, therefore, you can, you know, buy down that risk over time through characterization. What this does is it allows you to go through and really understand what your specification should be. And so this is an example of an initial early phase autologous T cell therapy specification, obviously looking at color and clarity for appearance, identity, purity, you know, using flow cytometry techniques to look at cell viability, T cell purity, product-related impurities, as well as process-related impurities. A lot of what we've seen too is looking at strength based on viable T cells using flow cytometry as a surrogate to potency in early first in human type studies. And then certainly safety, you know, transduction controls, VCN, RCL, as well as your standard endotoxin, mycoplasma, and sterility. And so obviously a lot of these are embedded based on ICH or agency expectations, right? There's some of these that can be evaluated based on real-time data, based on what you're seeing in the clinical space, and based on your experience holistically, to where you can adjust and drive those improvements. So obviously, throughout the entire pathway from early to commercial control strategy, you're looking at the characterization studies, process and product, your correlative analysis, and that really drives you to understand what your final commercial control strategy is going to be. And so this is an example where, you know, I think the key here is meaningful specifications established per clinical correlative analysis.
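To illustrate the scoring logic just described, here is a small sketch of a PQRA-style table in which severity times occurrence gives a preliminary hazard level, and the detectability afforded by routine versus periodic versus no testing buys that down to a residual risk. The 1-to-5 scales, attribute names, and risk bands are illustrative assumptions, not the scales defined in the A-Cell chapter.

# PQRA-style scoring sketch: severity x occurrence -> preliminary hazard,
# then detectability (1 = routine release test ... 5 = no test) -> residual risk.
# Scales, attribute names, and banding are illustrative only.
import pandas as pd

attributes = pd.DataFrame([
    ("cell viability",         4, 2, 1),
    ("residual bead impurity", 2, 3, 3),
    ("vector copy number",     5, 2, 1),
    ("attribute four",         5, 4, 1),
], columns=["attribute", "severity", "occurrence", "detectability"])

attributes["preliminary_hazard"] = attributes["severity"] * attributes["occurrence"]
attributes["residual_risk_score"] = attributes["preliminary_hazard"] * attributes["detectability"]

def risk_band(score: int) -> str:
    # Illustrative banding of the residual risk score.
    if score <= 12:
        return "low"
    if score <= 40:
        return "medium"
    return "high"

attributes["residual_risk"] = attributes["residual_risk_score"].map(risk_band)
print(attributes.to_string(index=False))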
And I would add to that an appropriate statistical analysis, And I would add to that an appropriate statistical analysis, as well as process-related impurities, right, for process characterization and purity risk assessments, and then making sure that you have a meaningful clinical corollary analysis for potency transduction control. So it really is a tool that enables you to further clarify and adjust and shift and establish meaningful specifications. And so this is kind of the final slide. I kind of look at it, you know, we as an industry have to adapt and leverage the novel approaches to developing a phase-appropriate control strategy, looking at the industry tools, making sure that we utilize those throughout development, and then making sure that when we look at the integrated control strategy, it contains all aspects of process development, GMP development, raw material controls, procedure controls, routine versus periodic release testing elements, comparability, PPQ. All of those have to be embedded to support an integrated control strategy approach that's going to be meaningful to the agencies that review this so that there's a clear understanding that we are doing everything that we possibly can to protect patient safety and improve the likelihood of product quality to support a commercial process. And so, you know, in my mind, it's the most important thing that we can do to support our patients, making sure that they can get the cells that they need, get the treatment that they need in a very time and cost effective way. So that's, I think I've got about 12 minutes left, but thank you for listening. Thank you for coming. I appreciate that. And I think that's it. So thank you. It's my email, just in case if anybody needs it. So with that, I will stop sharing and I think we'll go into a Q&A. Yes, thank you so much, Nolan. So we have, yeah, we have 12 minutes for Q&A right now. We've started receiving some questions from the audience, so I'll get to it right away. So how do you best design or implement control strategy taking into account the patient variability and commercial spec setting philosophy from the agencies? So we can stay in control while also maximizing production success rate and minimizing unnecessary out of specification. So that's a great question. Right. And I think if you look at this, obviously, making sure that you've got enough that you don't cut corners on characterization. Right. You understand and develop a robust process. You've got to have a robust understanding of your analytics and making sure that you have enough replicates to support meaningful clinical development. So obviously it's going to be much harder if you have a clinical study with 20, you know, 20 people versus 200 or 600. So that's going to be critical for you, obviously, and making sure that you can establish that clinical correlation. And so I that that's going to be critical for you obviously and making sure that you can establish that clinical correlation and so i think that's critical um the one thing that i would take a look at and this is kind of the the uh the challenge right because as we were interacting with the agencies with the fda and others um on what would be the meaningful specification, right? From an industry perspective, in many instances, we had OOS type material that says, okay, we've got one or two patients that had OOS material and it turned out that it was a clinical benefit for them. 
I think the agency's perspective is to come in and say, well, we want to make sure there's process consistency. So where do you set that line? Right. And in some cases, there's a negotiation, obviously. And so you may have to set up, okay, based on an evaluation of process consistency, we may have to tighten it and establish that as the commercial control strategy. But I think the critical piece here too is to make sure that there's an embedded understanding and alignment on what happens if and when we get a certain number of OOS type results. So inevitably you're gonna have some level of patients that will be OOS during routine release. You can do an exception release and they can still get the therapy, and you can still monitor from a clinical perspective the patient outcome. But having an understanding of how many of those out-of-spec patient examples is good enough. And I think that's, I think as an industry, I think that's the hardest part for us to understand. How many is good enough, right? What's the statistically significant number of patients to where you could, you know, absolutely identify, hey, we believe that we can go through and do this and widen this specification. So it's going to come with experience and it's going to come with the number of OOSs. So for example, if you have a robust process and you have 600 patients and they're all really tight and you have one patient that's an OOS, it's going to be much more difficult to justify that extended specification range compared to if you had, you know, 600 patients and you had 50 of those that were OOS and it was a wide variability process, you know, or just based on the patient population or the disease state. So hopefully that helps clarify some things. It may come down to numbers. It may come down to the absolute numbers of healthy donors to support characterization as well. Great. Thank you. And maybe as a follow-up to that question, the audience asked what your experiences are, your success leveraging the correlative analysis, similar to the examples that you shared in the presentation. And I would say my experience has been, we always started off with the most aggressive approach, right? So obviously within the industry, you're trying to balance, you know, commercialization as well as patient access, knowing what you know. And essentially, you know, from our standpoint, we knew that this OOS, you know, cell viability was down at 40 or 50%, for example, and our spec was at 60 or 70. And we knew that it had a benefit for that particular patient, right? And so we were trying to push for that as much as we can. We didn't always win, but it's a negotiation and it's a discussion point. And what it allows you to do is go through and say, okay, you know, based on different statistical analysis approaches. So in the biopharm space, you may be looking at a stats analysis of a 95% confidence, 95% coverage versus 95% confidence and 99.9% coverage, right? So depending on your stats, you can adjust how aggressive you are with your specs. And essentially what we had to do is we adjusted and shifted those. And obviously the FDA wanted us to come back a little bit. But we didn't come back to the point that you would see in a typical biologic. So there was some level of understanding that we did have a clear understanding.
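To show how the statistical choice mentioned in this answer plays out, here is a minimal sketch of two-sided normal tolerance intervals at 95% confidence with 95% versus 99.9% coverage, using Howe's approximation for the k factor; the viability results are simulated, not clinical data.

# Two-sided normal tolerance intervals (Howe's approximation for k).
# The viability data are simulated for illustration.
import numpy as np
from scipy import stats

def two_sided_tolerance_k(n: int, coverage: float, confidence: float) -> float:
    """Howe's approximation to the two-sided normal tolerance factor."""
    dof = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2_lower = stats.chi2.ppf(1 - confidence, dof)
    return z * np.sqrt(dof * (1 + 1 / n) / chi2_lower)

rng = np.random.default_rng(3)
viability = rng.normal(78, 6, 40)  # hypothetical % cell viability results
mean, sd, n = viability.mean(), viability.std(ddof=1), len(viability)

for coverage in (0.95, 0.999):
    k = two_sided_tolerance_k(n, coverage, confidence=0.95)
    print(f"95%/{coverage:.1%} tolerance interval: "
          f"{mean - k * sd:.1f} to {mean + k * sd:.1f} (k = {k:.2f})")

The higher-coverage interval is wider, which is what drives the wider proposed acceptance range, and the negotiation is then about which coverage is appropriate given the clinical correlative data.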
We knew and understood what the impact was. And then certainly for an autologous cell therapy, what does that mean to patients, right? If you have one patient that's OOS and they can't get enrolled in a single-patient clinical study, or it's delayed, what does that do to that patient? It either means that they don't get treated or it means that their treatment is delayed. And from my perspective, when you start looking at where some of these cell therapies sit, whether it be a third or fourth or fifth line therapy, these patients are at the end of their rope and they don't have a whole lot of time. So I think that's the big thing that you're trying to balance in that negotiation with the agency. Great. Yeah. Thank you, Nolan. What does the organizational implementation of the integrated control strategy look like? How would an organization need to operate to maximize this approach? I look at it with the experience that I've had in the past, and it was a matter of making sure that there was a strategic partnership and a healthy, shared understanding of risk, of GMP quality systems, of development, and of manufacturability from an MSAT perspective, and making sure that all of those are aligned along the same line. There were times and instances where, historically, based on a biopharma mindset, your control strategy was built solely on your characterization and solely on your CPPs, KPPs, and IPCs, and that was that. You have to make sure that you have a holistic understanding of what the strategy looks like, with all of the additional control elements that come in from a quality and GMP perspective, a manufacturing perspective, the technical perspective, and the clinical perspective. All of those have to come together to build that framework. And so if you look at what we did, this was at Genotherapeutics, there was a lot of discussion: what does this really mean, and how do we present this in the filings? We actually built an integrated control strategy section in S.2.6 that really just described it all, so you could combine the elements of your technical considerations and your characterization with your risk-based approaches and your GMP approaches, and essentially say: we're looking at everything holistically, and this is how we're buying down the risk to the patients. So how do you implement it at the company level? I'd say by making sure that everybody has a clear understanding of what that holistic strategy looks like, and getting all of them embedded and aligned to truly understand that this is only here to help, and we have to do this together. Great. Thank you. Let's see. Can you speak to the timing of small-scale model qualification and process characterization? Is it required to qualify the small-scale model prior to starting process characterization? It's a great question. And I'd say my personal philosophy is that having a small-scale model that's qualified is only going to help, right?
And there are unique instances: if you start looking at CAR T therapies, in many cases the commercial scale is not much bigger than what you're doing on the bench, to be quite honest. There are instances where the model is not fully qualified for all attributes, but it still allows you, from a comparability and characterization standpoint, to truly understand how you manage the process and what is practically different, and to do a lot of those studies. Because of patient variability, you've got to offset that variability with the number of replicates, which may mean you have to do more. So having a small-scale model early on in your process may not necessarily be needed for first-in-human, but certainly as you go through commercialization, having that model will only help. So hopefully that answers the question. Great. Yeah. Thank you. I think we have time for maybe two more questions. As part of a DOE, what type of raw material inputs do you see being altered to make sure you have a robust process? I would look at the number of lots. Obviously, if you look at some of the raw materials that we use in the CAR T therapy space, some are human-derived and some are animal-derived, so make sure that you have specific lot-to-lot variability embedded. The last thing you want is a process you've established around one raw material lot that works great, everything's fine, and then later on, in your 10th or 50th or 100th batch, you suddenly run out of that particular raw material lot. Now you have a different impurity profile, whether it be metal impurities or any other biological type of impurity, so you've got to make sure that you're building a robust number of unique lots into that particular process. I would also say, if you start looking at raw materials from an apheresis perspective, make sure you understand how you're handling that material, whether it be patient material or, certainly from a characterization perspective, healthy donor material. If you're going to say, we want a wide variety and we want to keep it very flexible, you've got to test very thoroughly that how you handle, store, and ship that material will not adversely impact your product. Those are different aspects that could be leveraged from a raw material perspective. So hopefully that helps. Great. Thank you. Okay. I'll wrap up with one more question. How do you assign values to the risk assessment or FMEA in the early stages, when you have limited process knowledge? There are whole decision trees and different tools out there, and we can certainly provide some examples, but essentially, with higher uncertainty your risk goes higher. In many cases early on, you may not know and understand what your occurrence is, that is, what the likelihood is of your process containing a certain CQA profile or a certain impurity, so you may have to err on the side of caution and rate it as a bigger risk. What that does is drive your RPN-type number, or your pre-QRI assessment, to say you have a higher risk there. And what you're going to do is identify and address that during characterization and commercialization.
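For readers unfamiliar with the RPN scoring mentioned here, a minimal sketch of a typical FMEA-style calculation follows. The 1-to-10 scales, failure modes, and scores are invented for illustration and are not from the talk; the point is simply that an unknown occurrence is scored conservatively high, which raises the RPN until characterization data says otherwise.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (direct impact on a CQA / patient)
    occurrence: int  # 1 (rare) .. 10 (frequent); scored high when unknown
    detection: int   # 1 (always detected) .. 10 (not detectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: severity x occurrence x detection.
        return self.severity * self.occurrence * self.detection

# Hypothetical early-stage entries; occurrence is unknown, so it is set high.
modes = [
    FailureMode("Process-related impurity above target", severity=8, occurrence=7, detection=5),
    FailureMode("Transduction efficiency drift", severity=6, occurrence=7, detection=3),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")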
We're going to go figure that out: is this truly a big risk, or do we have the knowledge and understanding to say it's really not a big deal? Great. Well, we are at time; it is noon right now. So with that, I'd like to wrap up the webinar and thank you again, Nolan, for lending your time and your expertise on this topic. We really appreciate it, and I'm sure the audience learned a lot from it. As a follow-up, if we didn't get to your questions, we'll try to answer them and post them online. With that, thank you again, Nolan, and have a great rest of your day, everyone. Thank you.
A-Cell: Control Strategy
3,673
Alliance for Regenerative Medicine
20230502
This webinar will discuss the development of a robust integrated control strategy for a CAR-T cell therapy product in the setting of current regulatory framework and guidelines. The presenter will cover various concepts within risk-based approach to control strategy generation, including various process characterization tools and correlative analysis between product CQAs to clinical outcome. Speaker: Nolan Polson, Ph.D., Vice President, Strategic Product Quality, GSK
2024-07-24T09:58:54.958903
https://www.youtube.com/watch?v=ClPPDk2U7mg
Recording is on. All right. Hello. We're going to do an overview of the program that we've been working on in computational biology. We've mostly been focusing on quantitative high-throughput screening for neurofibromatosis type 1, but we've really come to find that our algorithm is generalizable to other cancers and other types of analysis looking at high-throughput screening data. A little bit of background on us: we are an all-volunteer collective of different groups. I'm coming from the MoCo Makers Group, which is a makerspace in Montgomery County, Maryland. We're also part of the DMV Petri Dish umbrella, which is a nonprofit that is fostering biology, technology, and entrepreneurship in the DMV area. Our team is pretty large; we have over 40 people on our roster at this point. They come and they go, but our core is about five or six people. Over the course of this competition, we've had a lot of people help us out: NIH professionals, NIST professionals, PhDs, people from industry, people from healthcare, machine learning experts, computer scientists, really a who's who of expertise. We've had bioinformaticians and all kinds of people. We historically had entered a competition for the Children's Tumor Foundation, and we ended up winning two out of the three challenges in that competition. One of them was an incubation prize that we continued our research under, and that was the one on drug discovery for neurofibromatosis type 1. So we've continued that work, and we ended up publishing in December of 2023, which was a good culmination of all the efforts up until that point. On the other side of that, we want the algorithm that we published to be better understood in the industry and by scientists, with more awareness, but we also want to generalize our work to other cancers and other aspects, even within NF1. There's a lot more we can do with it than we had just with our original data set and our original premise. So what we're going to cover during this slide deck and these presentations is where we are and where we've been. A lot of where we've been is the initial development of the S indices. We'll get into what that means coming up, but basically these are ranking criteria for ranking potential compounds for treatment, and our original publication was around something called delta S. We are now also looking at a variation of that called S prime. Our current status is that we're looking at the S prime indices and trying to generalize them: how can they be applied to other cancer cell lines, and to lots of other types of cancers and diseases as well? And then, longer range, we have some things coming up around combinational drug evaluations, rare disease drug evaluations, and neural network modeling. So let's look at our background for the syndrome. And Paul, if you ever want to speak up, go for it; just feel free to interrupt me at any time. Neurofibromatosis is a rare genetic disorder. It's defined by some kind of mutation in the NF1 gene. A little bit on the notation, because this is really important for the industry: when you're talking about the NF1 gene, you italicize it, and when you're talking about the disease, you have it in bold like this. That's really important because the terms are reused a lot.
So having really clear standards for which one you're referring to, especially in literature, is very important, and we'll try to follow that convention here as well. NF1 is a gene with a protein product called neurofibromin. You can consider it a tumor suppressor. It's part of a complex that affects cell growth, and when there's a disorder with it, one of the effects is the creation of many benign tumors. It is autosomal dominant, which means that if you have one mutated copy out of the pair on the chromosome, that copy will have an overwhelming effect on the phenotype. Most NF1 patients are born with one mutated gene copy per cell, so most NF1 patients are plus/minus for the gene, meaning they have one wild-type and one mutated version. The reason you don't see minus/minus very often is that it tends to be fatal in the embryonic state, so people who are minus/minus generally do not survive to birth. In any case, most people with the disorder are plus/minus. You will see minus/minus crop up later in our slides in relation to cancers: cells can be minus/minus if they are in a tumor state or some additional mutation has happened. But for the most part, you can think of most patients as heterozygous, meaning they're plus/minus. It is a multi-system disorder, which we'll get into coming up in terms of the symptoms, but beyond the physical parts it also has cognitive effects, like learning disabilities and ADHD, and common complaints are pain and decreased quality of life. And it predisposes patients to developing tumors. Now, the tumor type that we focus on is plexiform neurofibromas. We'll get into that coming up, but these tend to be tumors of the peripheral nerve sheath that are on the inside of the body. Another very common one is cutaneous neurofibromas, which are the same kind of nerve sheath tumor but closer to the surface of the skin. Occasionally those become malignant, and when they do, they're referred to as MPNSTs: malignant peripheral nerve sheath tumors. So again, it's a rare disease; it affects about one in 2,500 people. This means that from the FDA's perspective it would qualify for orphan drug status, which reflects that very few groups are researching it, and the bar for approval is a little bit lower. It's fair to say that a lot of the pharmaceutical industry is not looking at this because they don't see it as a large market. So it's something that affects a large number of people, but not necessarily large enough to move the needle from a commercialization perspective. The mutation itself can be as little as a point mutation: literally one base pair flips, creating a different amino acid, and that can cause the full effect of the disorder if it's in the right location. But it can also be a set of mutations. So when we have patients who all have NF1 but some are more severely affected than others and present differently, one of the factors is that the mutation may not be in the same location, right?
You might have a defect in a really critical region, or you might have a few mutations in moderately important locations but not the critical region. There are different ways of looking at what ultimately deactivates it, and those details matter, because it could be that your protein is still partially effective at its original function; there's some nuance there. It is a very large protein: about 320 kilodaltons, which is a way of expressing protein size, and it's generally a pretty large one compared to a lot of other proteins in the body. It contains 2,818 amino acids. So this slide represents the main symptoms. The ones that we're really focusing on are the tumors, one of the hallmarks of the disease. These are cutaneous neurofibromas, the surface ones. We're looking at the plexiform neurofibromas in our original paper, which are interior to the body but just as prevalent all over, and these can be very painful. Most of these are benign tumors, but they can still be life-threatening even though they're benign. One of the big issues is that if they grow and they're impinging on an important tissue, they can kill that tissue, and that can be fatal. They are also very painful, and there is the disfigurement. The disease can also have other effects: there are cognitive effects as well, learning disabilities, headache, seizures, and macrocephaly (an oversized head). So sometimes what the families are saying is that they need support, right? Families can have trouble raising children with NF1 because of the learning disabilities and cognitive issues, as well as the physical things. But one of the points of this slide is that this is a whole-body disease. There is scoliosis, which is a curved spine; Lisch nodules, which are little specks in the iris; optic gliomas; chronic constipation and diarrhea. And then these are hallmark symptoms: cafe au lait spots, which are little dark spots that form on the skin, and you can also have freckling in the creases of the skin. A lot of times I like to say that the neurofibromin protein plays an important role in connective tissues, and one of the things it's associated with is microtubules. Microtubules are an important building-block protein in axons as they're forming, axons being the parts of the nerve that grow out and connect to other nerves. If those aren't being regulated properly, then you have problems with the growth of the neurons, and that logically relates to learning disabilities. Some of the other things, like the high blood pressure: there are blood vessel defects, so the lining of the blood vessels in the brain is also an important factor. And then there's myelin. Myelin is a really important one to understand, because a lot of these are tumors of the myelin-producing cells. Myelin is made by cells that wrap around the axons of neurons, and normally it insulates them so that the signal going down the axon, this long finger-like projection from the neuron, actually travels faster when it's wrapped by the myelin. It's important for the central nervous system, but it also has a role in the peripheral nervous system.
And those myelin-producing cells are the cell type involved in the plexiform and cutaneous neurofibromas that we're looking at. But again, there are a number of cell types involved, and it is important to think of this as a whole-body disease. For our continued research, we'll start to hyper-focus on the cancers, but please don't forget that it's not just cancers that are affected; it is the whole body, and it is very important to think about what other pathways are involved in the cells and what other cell types are involved in this disorder. All right, so let's look at NF1 by age. First of all, there is a second type, NF type 2. It has different proteins involved, but there is some overlap in some of the downstream proteins. We are focusing on NF type 1, and again, the NF1 gene is the one that's mutated for that. In the embryonic stage, you can have developmental disorders coming out of this: improper formation of dendrites in the hippocampus (the hippocampus is related to storing memories) and problems with neurite growth and growth cone formation. These are again about how neurons are growing; at the very tip of the part where the neuron is growing, some of these proteins are not working properly, so it's just not able to grow properly. That would, of course, logically relate to some of the cognitive disorders that we see coming out of it: learning disorders, ADHD, cognitive impairments. And again, the families are looking for support and awareness around these developmental issues. Again, if you're minus/minus, it's probably fatal, so a lot of this is about that heterozygous state. In childhood, most often you are diagnosed with NF1 by age five. That's because that's when you start to show some of the symptoms: cafe au lait spots, the benign tumors, and some of the other diagnosable traits. The needs at this age are around the social issues of disfigurement, the pain, and there can be mortality issues. Going from childhood into adulthood, typically you're going to have the benign tumors, and about 15% of those will go from a benign state to a malignant state. A malignant tumor is absolutely a life-threatening condition, so the treatments change very much; now we need life-saving cancer treatments. One thing to keep in mind with all of the compounds that we have coming up: they're all essentially chemotherapies. We're talking about something that is killing cancer, and those are toxins. So it's very challenging to have somebody on a toxin for a very long period of time. If it's a life-threatening situation, if you have malignant cancer, we're going to give you the toxins that are bad for you, because they're worse for the tumor cells. When we talk about people that don't have that and just have the benign tumors, their options are more limited; the doctor isn't going to give them such a strong toxin. And that's a known problem in the NF1 field: what are we doing for people with benign tumors? Most often it's surgery, but that's not always an option. Sometimes you have tumors that are really near critical organs and you just can't operate there, and those patients don't have many options. We'll get into that coming up. So let's take a step back and talk about the clinical trials process. I'm going to hand it over to Paul. Paul, could you do a little summary of what the pharmaceutical investigational process looks like?
Okay. The clinical trial format arises out of an extensive series of preclinical investigations that look at all aspects of the manufacturing as well as the drug action. Those are all wrapped up into a preclinical evaluation of the drug as it moves forward. All of the preclinical studies, and the clinical trials themselves, are ultimately regulated by the FDA, and it's the FDA, the Food and Drug Administration, that is involved in the approval process to allow the drug on the market in the US. The intent of these clinical trials is to show safety and efficacy. That remains the focus from the onset, in the earliest phases, all the way through the late stages and commercialization. The preclinical investigations typically involve in vitro studies, which include high-throughput screening (we'll be talking more about that later), in silico studies, and animal studies; there's a heavy focus on animal studies that try to mimic the human disease. Those all lead to a phase 1 study, which is a very limited study in humans, primarily designed to establish safety. Phase 2 studies are limited studies looking primarily at safety but also at efficacy, whether the drug actually works or not, but with a limited number of patients. Phase 3 studies are large studies involving thousands of patients and are aimed at establishing the proof of safety and efficacy. Phase 4 is the post-marketing phase: even after a drug enters the market, it is followed for anywhere from five to ten years to ensure that its use is both safe and effective. And it's important to note that clinical trial costs can approach nearly three quarters of a billion dollars, depending on the drug involved. So it's a huge amount of money that goes into clinical trials and drug development for each drug that comes to market. Underlying this all is the initial drug screening, which is intended to assure that only good drug candidates are actually brought through the entire process leading up to clinical trials. This is especially important in underfunded areas like rare diseases, one of which is the NF1 syndrome. Matthew? Thank you. So we won't go into this too much; we just wanted to give you a survey of what drugs are available for NF1. And the answer is there's really only one, selumetinib, which is approved for pediatric patients with inoperable plexiform tumors. At this time it's not approved for adults, and it's only for pediatric patients whose tumors can't be dealt with by surgery, which is a very, very narrow set of people. And in our data, we have reason to believe that it isn't even the best compound; we think we see things that are potentially more effective. So it's not the ideal compound. One of the things that also happens is that you can become resistant to this drug, so it stops working over time, and that can be a very serious challenge for some people. But there are other compounds in various phases that folks are looking at, and we are hoping as a field that more solutions are presented to the patients. So I wanted to dig into how neurofibromin works.
Remembering that it has multiple functions in multiple cell types, we're really going to focus on the archetypal role in the myelin-producing cells, which is what gives rise to the cancers; there are variations of this and different pathways in other cell types. In a cell like that, we're going to look at a receptor that's receiving growth factor. Normally when the growth factor comes in, it turns on this Ras protein here, and Ras goes from a GDP state, guanosine diphosphate, to a Ras-GTP state, guanosine triphosphate. In this triphosphate state, it activates these downstream pathways, and all of these downstream pathways end up resulting in cell growth, cell proliferation, and survival. The role of NF1, essentially, is to help deactivate that activated Ras-GTP. So you can imagine, if you don't have NF1, then you have this activation turning things on and nothing to turn them off. It's like a car with the accelerator all the way down and a broken brake pedal: you just have nothing to turn off the activation state. So it's very important to have NF1. It does form a complex; this is a little oversimplified, but it should give you the flavor of what we're talking about. I go into a little bit more detail here. What we have are pictures showing a structural perspective. This thing at the bottom is the whole neurofibromin protein; you'll see again that it's a very large protein, as we said. Then we have a few separate proteins up here: the brown and orange is the receptor, there are other things that interact with the receptor, like SPRED2, and then you have the Ras-GTP over here, which normally floats independently but can form a complex with this whole assembly of NF1, the receptor, and these other components. And again, its purpose is to help cleave one of the phosphate groups off the GTP; that's why we'll end up calling it a GTPase-activating protein. Thinking of this GTP: it's very similar, if you're familiar, to ATP, which is the main energy currency inside the cell. You have glucose in the blood, it gets broken down, and the energy is carried as ATP, adenosine triphosphate. Normally, when you have movement inside the cell, you're converting an ATP to an ADP. It's very similar here: when you do it with ATP, you're releasing energy, and that energy is generally deforming the proteins, doing something by changing the shape of the protein. That's very similar to what we have going on here, except the emphasis with the GTP is, again, on a signaling cascade, which we'll get into in a little more detail.
And it is an upstream regulator of other types of proteins, including cytosolic kinases. Cytosol is the fluid inside the cell, and kinases are enzymes that activate or deactivate other proteins, usually by adding phosphate groups. We're going to throw some terms out at you; this is just so you start to recognize them, like, oh, I've seen that before. You'll see these very commonly in cancers. We're not expecting you to remember exactly what they are, but we want to start introducing them so that when you see them come up in pathways and in cell signaling, you associate them with cancers and with cell growth and proliferation. So we have the Ras GTPase-activating protein, which regulates several signaling pathways: Raf, MEK, ERK; PI3K, AKT, and mTOR; Rho and LIMK; cAMP and PKA. Those are important ones. That's all to say, again, that we have upstream and downstream signaling, and that what we're dealing with are proteins that have a cascade of downstream effects, and those downstream effects relate to cell growth. We go into that a little bit here. This is a more comprehensive version of the first graphic, but here we have the neurofibromin. It interacts with the Ras protein. When we talk about NF1 and NF2, the one thing they have in common is that they're called RASopathies. RASopathies are a class of disorders that somehow relate to the Ras signaling cascade going wrong. In the case of NF1, what's going wrong is that we're not able to turn off this Ras-GTP, the activated state; we're not able to deactivate it to the Ras-GDP state, D for diphosphate. Now, in its activated state, again, there's RAF, there's MEK and ERK, there's AKT and mTOR, all relating to cell growth. Neurofibromatosis type 2 has some components over here, and this is the cell nucleus; I won't go into that too much. But generally there are going to be these receptors. Over here we have an epidermal growth factor receptor, and I think this one is a receptor tyrosine kinase, so different types of receptors, and different things in the tumor microenvironment, which is just a fancy way of saying all of the interstitial fluid around the cells that make up the tumor. There are growth factors and other signals hitting the cell surface that promote growth, and that's turning things on that turn other things on, all of which relates to tumor formation. So we're going to continue from there. Before I go too much deeper, I'm going to take a pause. Hassan, do you have any questions, at least at a high level? We're not going to ask you about the pathways exactly, but do you have any question conceptually about why NF1 is involved in tumor formation? Well, I don't have a background in biology. My question is: the analysis that we will be making through the Python development, is it to come up with a general procedure for assessing drugs, or is it working on a specific proposed drug to treat this disease? It's more general than not. It's compound agnostic; it works no matter what the compound is. So when we look at high-throughput screening, which we'll get into very shortly, it's about throwing a thousand compounds at the cells and asking which of those thousands worked. And again, we know that what's going wrong is that we're not able to turn off the Ras signaling.
The Ras signaling up here is on, NF1 is broken, so we're not able to deactivate it. So, for example, selumetinib, the one approved drug, is a MEK inhibitor. MEK is downstream of this signaling up here, so if we're able to inhibit it, it's kind of like applying a brake: we have less MEK activity, and therefore less cell growth, proliferation, and survival. So even though NF1 is broken, if we give you this MEK inhibitor, that helps to address the tumor formation. Basically, we're looking at all kinds of compounds, seeing which ones work, and then, after we've identified that they work, trying to figure out why they worked: what part of this signaling pathway did they affect, so that we know how they did what they did. Does that make sense, or do you have any questions? Yeah, that makes sense. Okay. We'll get into the high-throughput screening in just a few slides. What we're going to go through right now is a bit more of a deep dive into what we've worked on historically, which is the initial development of the S indices. Just to clear up some confusion: S and S prime are different. They're very similar in approach, but they are different algorithms. What we published historically are S and delta S, which we'll get into right now; just keep in mind that when we get to S prime, it's slightly different, with a different calculation involved. Again, some of the history: this came out of the Children's Tumor Foundation hackathon. It was all about high-throughput screening data; that's the only type of data we are looking at right now. The original data set was all about plexiform neurofibromas, which are, again, benign tumors. We're not talking about malignant tumors at this point in the evaluation; we're talking very specifically about plexiform neurofibromas and which compounds we find that address this type of tumor. The tumor cells that were sampled were mostly NF1 minus/minus, which is just a side note. Our goal was to establish a robust algorithm that identified new or repurposed drug candidates for preclinical evaluation. We know that it's very expensive to go to clinical trials, and there isn't much money in rare diseases. It's very challenging to spend money in this space; money does exist, thankfully, through some of these foundations, but it's very challenging to do these kinds of studies. So generally, you want confidence before you do a more expensive study, whether it's an animal study or, if you can get there, a human study, because it is very expensive.
And you want to know ahead of time whether it's going to be successful, and that's where really good, nuanced analyses that you can do relatively cheaply are really important. So: high-throughput screening data, and anything you can do in vitro or in silico with algorithmic modeling to clear out the false positives as early as possible, is what you want to do. It's also helpful for identifying the mechanism of action, which in biology is the way of saying how a compound does what it does. For example, the selumetinib we talked about: how does it help with cancer? Well, it inhibits this part of the pathway right here; it inhibits MEK. Its role is to reduce cell growth and proliferation by going lower in this signaling cascade and inhibiting that point. So for that compound, selumetinib, its mechanism of action is MEK inhibition. Finding that out for new compounds can be very, very helpful. Again, we want to have a biological justification: why do we think this compound is effective? Step one is, okay, we found that it seems to do well in our screening of the cells, but why? Can we answer that? Ultimately, all of this is supposed to inform our clinical response forecast: can we look ahead and project how likely this compound is to have a good effect in the clinical setting? Clinical, again, means clinical trials, which involve humans. So by the time we're involving humans, do we have a strong reason to believe that it will have a good effect? It is known that high-throughput screening data has a poor association with clinical outcomes; it's not one-to-one all the time, and that's a problem. You can get promising molecules out of it, but it's not a magical process that always works. So we want to do anything we can, above and beyond a classic high-throughput screening analysis, to also support why these compounds are going to work in humans. It all goes back to being more effective with the money and more effective in how we're guiding future assessments. I'm going to take a pause there, Paul. Is there any flavor you want to add, or any additional caveats? No, I think you're doing well. Go right ahead. Okay. All right. So, a little bit more on the project background. The slide says high-throughput screening tests cell lines over multiple drug concentrations; what we're really saying is that we test compounds at a variety of concentrations and evaluate cell responses. Those responses can be viability or some other readout; the one we're looking at is viability, which is how many of the cells survived. Again, think of these as chemotherapies: we're trying to kill the cells, and we measure how many of them died or survived.
The responses that you get when you plot the activity level you're measuring versus concentration are going to look like a sigmoid curve, which is an S shape, and they all use a similar regression model in the analysis; all sigmoid data use some form of that analysis, which we'll get into coming up. Different cell lines can be used: some of the cell lines are the benign tumor cell lines, and in our original data set we also had non-tumor cell lines. That ended up being very useful because, to give you a preview of what's coming up, we're going to look at the difference between what the compound did in normal cell lines versus what it did in the tumor cell lines. All right, a little bit about high-throughput screening. It is a large operation. Normally you're going to use robots, something like this on the left, and you have these multi-well plates, so you're going to have thousands and thousands of wells. Each well is going to get a copy of the original cell line. These original cell lines, by the way, are clones: you take a sample of somebody's tissue, and then you need to modify it so that you can make the clone. This is where some of the problems start for this style of analysis. You have to immortalize the cells, which means you mutate them so that they propagate indefinitely, and you also add a genetic sequence for a reporter protein that helps the cells glow, a fluorescent or luminescent reporter, basically, and that's going to help you downstream when you're trying to measure them. So what happens eventually is you give the cells the compound, and then you image the plate: the cells that carry the reporter are going to glow, and you measure how much they're glowing. The intensity of the light tells you how many cells there are. So you're essentially able to count, or at least come up with a metric that relates to, the number of cells after giving a compound. If there's more light, more luminescence, there are more cells; if there's less light, there are fewer cells. That's the fundamental activity that we're measuring: when we give these compounds at different concentrations, we measure afterwards how many of the cells survived based on how much light there was. Are there any questions on that, Hassan? This was actually the example that clarified it the most. You basically assume what the results are going to be based on other experiments that have already been done; is my understanding right? I don't think I quite understood the last part. Basically, there are already data from experiments that have already been done, right? That's correct. So you use them to predict the results before actually doing the experiment. That's right. The data set that we're starting with was the outcome of this kind of setup. We did not run the robotics; the people that ran the robotics released their data, and we used that data to do our analysis. And that's very common for the kind of work we do.
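To make the luminescence-to-viability step concrete, here is a minimal sketch with made-up numbers; the talk does not describe how the original screen normalized its readings, so the vehicle-control convention below is only an illustrative assumption.

import numpy as np

# Hypothetical raw luminescence readings (arbitrary units) for one compound,
# one cell line, across 11 increasing concentrations, plus control wells.
raw_signal = np.array([9800, 9600, 9100, 7400, 5200, 3100, 1800, 1200, 1000, 950, 900])
control_wells = np.array([10000, 9900, 10100, 9950])  # untreated (vehicle-only) wells

# Percent viability: treated signal relative to the mean control signal.
percent_viability = 100.0 * raw_signal / control_wells.mean()
print(np.round(percent_viability, 1))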
We're mostly focused on what happens after the data has been generated, doing the computational parts after that. But yes, somebody somewhere actually physically ran this rig, and that was not us; that was prior researchers. All right. So when you do these high-throughput screens, you get data, and the data looks like this at the bottom: this is a dose-response curve. You have the concentration of what you're adding and then you have the activity, and as I mentioned, these tend to have a sigmoid shape. You can look at different cell lines or different variations and do a comparison. The model for the sigmoid, to use its technical term, is a four-parameter regression model, and we'll get into that coming up. When you turn your data points at these different concentrations into the sigmoid, that's what you're doing: you're fitting a four-parameter regression model. All right. So the data that we started with covered many, many compounds. We're careful with our wording between drugs and compounds. "Drug" generally carries the implication of already being approved by the FDA for something, and a lot of these are experimental; they may not be approved for anything, in which case the more general term for us is "compound". So we have this large set of compounds. Some of them may become drugs at some point; some may be drugs that are already approved for something else. That's actually a really good situation for us: if we can find things that have been approved for other diseases that we can reuse here (the term is repurpose), that's great, because we know they're safe in humans and they may be able to help this patient population. It's still a challenge from an economic modeling perspective, but one problem at a time; we'd rather find, scientifically, what the best compounds for this treatment are. We have a mix of plexiform neurofibroma tumor cell lines and a set of non-tumor cell lines. Strictly speaking, in our data we had 11 concentrations, and for each of those concentrations, one reading of what the activity level was for that cell line. And again, the four-parameter regression model has some aspects we'll get into, but to give you a preview: it has an upper asymptote; there's a slope in the middle of the S; and right at the middle point there's a concentration called the AC50, where 50% of the activity is happening. Then there are a few other parameters. The R squared is how good a fit to the model it is: if you have points that don't follow an S shape, the R squared will be low, and if they perfectly follow an S shape, it will be one; it generally goes between zero and one for how closely the data fit. And AUC stands for area under the curve; it's another way of doing the analysis where you just take the area under the sigmoid curve. Sometimes people calculate and use that. We didn't end up using it currently, but that's future work we have coming up. So, the cell lines that we actually end up using have these labels.
So these are the labels for the four different cell lines that we actually end up using. They all start with "ipn"; that's a convention we inherited from the original authors that did the study, and you'll become familiar with them. We mostly focus on the last parts of the names: 05.5, 06.2A, 95.11b, and 95.6 are the tumor cell lines. We only used one non-tumor cell line as our reference, and that's 95.11C. You'll become more and more familiar with these as you work with the data, but we do know which ones are which. The one thing to know about the non-tumor line is that, while it is non-tumor, it still comes from an NF1 patient, and that's actually ideal for the kind of analysis we want to do, because the people who would be receiving this treatment are NF1 patients. What we essentially want to do is find compounds that are better at killing tumor tissue than non-tumor tissue; we want to leave things like the brain and the heart alone and just kill the tumor. So we want to find compounds that have more activity in the tumor cell lines than in this reference line. In the reference line we want to see very little activity, because activity there means the compound is killing non-tumor tissue. And you can get this data online. It's on Synapse, publicly published; you can request access to it, and they give it to you right away these days. All right, so this is more of a deep dive into what a dose-response curve looks like. Again, you're creating this model, which is the sigmoid curve. Essentially, at very low concentration you have very little effect; at some point you've added enough concentration to start having an effect; and at some other point you've given so much that you've saturated the effect, and there's no more effect you can get by giving more of the compound. That's why you have these sigmoids, and they have these asymptotes, the upper and the lower. What we're calling A is the upper asymptote, and D is the lower asymptote. We're going to follow that notation in some of our future slides, so when you see A and D, we're talking about the activity levels at those asymptotes. Now, there's a special point right here in the middle, which is the AC50: the concentration where there's half of the maximal activity. You'll see similar terminology elsewhere: EC50, the half-maximal effective concentration, and sometimes IC50, which refers to inhibitory concentration. They all mean roughly the same thing; there is a different emphasis, since inhibition is a specific type of response, but the AC50 and EC50 are, for our purposes, interchangeable, and you will see both in the literature. This formula, by the way, is the formula for the sigmoid curve, so if you're creating the model, this is how you would build it. But it's enough for us to look at just A, D, and C; that's the heart of our algorithms, just A, D, and C for the most part. So A, again, is extrapolated as the asymptote at zero drug concentration, all the way up here.
And D is extrapolated to an infinite drug concentration over here. Obviously we're not actually going to those extremes, but we are able to extrapolate using the model. All right, and I think we'll stop just about here. To give you a preview of what we have coming up: what we actually care about are the relative differences between different types of responses. You'll see that some responses follow a good sigmoid curve, and then you're able to look at the AC50 for one and the AC50 for another and make a judgment call on which one is more or less effective. We'll get into some of that at our next meeting, but it's eventually going to become: how can we combine both the height difference between the asymptotes and that C value into a single value? And then, how can we compare that value in the non-tumor reference line, which gives S-ref, with the value in the tumor line, which gives S-test? We'll get into that more in our next session. But the key things to understand at this point are that sigmoid curves are at the heart of the analyses for high-throughput screening data, that we're looking at a concentration-versus-activity response, and that we're able to summarize our data by looking just at these parameters, A, D, and C, which we're going to use to power our algorithm, or our formula, coming up. So at this point, are there any questions? Nope, it's pretty clear. Okay, great. I'm going to turn off the recording. Give me a second.
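A minimal Python sketch of the four-parameter logistic dose-response fit described in this session, pulling out the A, D, and AC50 parameters. The data points below are invented, and the talk does not specify the exact fitting choices (weighting, constraints, software) used in the original screen; this is just a generic illustration with scipy.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, A, D, log_ac50, B):
    # Four-parameter logistic in log10-concentration space.
    # A = zero-dose asymptote, D = infinite-dose asymptote,
    # log_ac50 = log10 of the mid-point concentration, B = Hill slope.
    return D + (A - D) / (1.0 + 10 ** (B * (log_conc - log_ac50)))

# Hypothetical 11-point dose series (concentration units arbitrary) and % viability readings.
conc = np.array([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100])
viability = np.array([99, 98, 97, 93, 85, 70, 48, 30, 18, 12, 10], dtype=float)

params, _ = curve_fit(four_pl, np.log10(conc), viability, p0=[100.0, 10.0, 0.0, 1.0])
A, D, log_ac50, B = params
print(f"A (upper asymptote): {A:.1f}")
print(f"D (lower asymptote): {D:.1f}")
print(f"AC50: {10 ** log_ac50:.3g}")
print(f"Hill slope: {B:.2f}")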
NF1 Biology and Foundations - Computational Biology - Part 1
3,219
MoCo Makers Group
20240603
2024-07-29T10:03:54.234090
https://www.youtube.com/watch?v=6aQCKOEZRU8
Hello. Recording is on. All right. Thank you for joining us. We are going to continue our introduction. All of the materials to date have been things that have been published and turned into the nf.mocomakers.com web app. Everything from this point forward is either exactly where we are now or totally net new. Almost all of this we're going to need to codify into parts of a web application and generalize to other types of data, specifically other genes, other cancer types, and anything else we can do to generalize our approach. So where we are right now is the development of the S-prime algorithm. Paul, I think most of these slides are on your side, so I'll let you talk through most of them. Okay. This slide illustrates a little bit of why we wanted to move from S to a different index that is more inclusive. The S index is very good at handling situations where there is classic inhibition. By that we mean a sigmoidal curve whose response goes downward; the sigmoid curve on the bottom right is going from 100% down to zero, and that's a classic inhibition curve. But even in the original data, we had responses that were not behaving well and that we had difficulty with. The one with the open circles is one we had a problem with because it was very close to zero, and anything close to zero, essentially a flat curve, was very difficult to deal with. Even more difficult were curves where the quantity the S algorithm needs comes out negative, because the S calculation takes a log, and you can't take the log of a negative number. So we were stuck. What we left on the table, then, was a lot of data that was either close to zero or negative and that we couldn't assess. This is not unusual, and it happens in many databases, but it does lead to an incomplete assessment of the overall impact of the compound. We're only looking at compounds, in other words, that give us a classic inhibition curve, and that's not necessarily reflective of what is going on in all the cell lines. And this could, in turn, reflect, from a therapeutic point of view, what would happen in a population of patients: some would respond, some would have no change in the overall effect, and in some the tumors would continue to grow, which would be illustrated by the upper curve. In preclinical studies, one of the things you want to be able to do is capture all of those variations to get a good idea, or at least a better idea, of what's happening with the various drugs you're interested in.
And just focusing on those cell line responses that give you classic inhibition leaves things on the table, and you have an incomplete assessment. On the next slide, thank you, I want to go back just briefly to put some numbers behind that. In the original screen we had 1,912 drugs; only around 600 had parameters we could use, and the rest, over a thousand, were unqualified. So it's a large data impact that we're calling out here. Okay. Because of this large set of cell line responses that we just couldn't assess, we wanted to go to something different, where we could still generate a single value using the same input parameters that we did with S, which are based on the four-parameter logistic function, but also include negative and zero-type values. What we came up with is the asinh transformation, the inverse hyperbolic sine. Asinh is a transformation that involves a natural log function; the "ln" at the beginning of the open parenthesis indicates that it's a natural log rather than a base-10 log. This particular function is used widely in economics, where there are negative values, zero values, and positive values, and it has been used not only by us here but by an increasing number of scientists in the chemistry and biology realms, where there is a recognized need for handling positive values, negative values, and zero. Fortunately for us, this function is already built into Excel, Python, and R, so it's easy to use directly without having to develop it de novo. On the next slide: before we moved on to actually using it, we wanted to make sure that the S values and the S prime values were giving us numbers that were essentially equivalent. To do that, we did a comparison where we first plotted the two values, S prime against S, and what you can see from the graph with the blue dots is essentially a 45-degree line where all the data falls on a straight line. That graphically illustrates that there is a direct correlation between one and the other. We then also did a Pearson correlation coefficient analysis, and that's reflected in the graph where it says R = 0.999. What that indicates, again, is that there is a directly proportional relationship between S and S prime, where an R equal to one would be a perfect fit, so we're very close to that. We did this for a number of different cell lines and data outputs just to make sure we weren't missing anything. So what this establishes is that what we're looking at when we describe S and S prime is a similar endpoint. It's not the same number, because one is in log space while the other is in the natural-log-based asinh format, but nonetheless there is a one-to-one correlation in the overall data display. On the next slide is a flow sheet of how I've been handling the data, and we have started to move this into some of the Python work. We initially start with a parameter input from whatever data source we're using. Of particular importance for us is making sure that we're capturing the asymptote at the zero concentration, the asymptote at the infinite concentration, and the AC50. Those three values are the absolutely critical values that allow us to assess S or S prime. We then separate the cell lines by sets.
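As a quick aside on the transformation itself: numpy's built-in arcsinh matches the ln(x + sqrt(x^2 + 1)) form described above and, unlike a plain logarithm, is defined for negative, zero, and positive inputs. The example values below are made up.

import numpy as np

values = np.array([-12.0, -0.5, 0.0, 0.5, 12.0])   # made-up effectiveness-like values

# asinh(x) = ln(x + sqrt(x^2 + 1)), defined for any real x
manual = np.log(values + np.sqrt(values**2 + 1))
builtin = np.arcsinh(values)

print(np.allclose(manual, builtin))   # True
# np.log(values) would instead produce warnings and NaN/-inf for the non-positive entries.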
And by that, I mean we designate either reference cell lines or test cell lines, and these have the same or similar definitions as Matthew had described previously. We then determine what we're calling EFF, or effectiveness, and that is the difference between the two asymptotes, the zero minus the infinite drug dose, leaving the AC50 alone temporarily. The AC50, however, is then incorporated such that we combine both effectiveness and potency into one value. That's in the upper right-hand corner. That value is then transformed using the asinh transformation. And then we just repeat everything for all the individual compound cell line combinations. Then the mean S prime is derived for each compound and cell line. And based on those means, we calculate the delta S prime, which is the difference between the S prime from the reference cell lines and the S prime from whatever we're designating as the test cell lines. This fundamentally is the core of what it is that we're doing, and everything builds out of this. On the next slide, this is a curve of an actual data set that we had. I just wanted to show you what the overall curve looks like. You can see that it has basically two major components, one of which is curvilinear, which is seen on the upper portion going from about 15 down to about 5, and on the other side going from about negative 15 to about negative 5 as well. In the middle there's a linear component, and this is totally typical of asinh transformations: there's a linear portion and curvilinear portions to the overall curve. I'll just leave it at that for the time being. On the next, Matthew. This was intended more for the biologists, but I'll talk about it anyway. I wanted to provide for people an overview of what known chemotherapeutic agents may look like in this one cell line, just by way of reference. So the red dots that are superimposed on the blue curvilinear line represent various drugs and their average S prime value. And you can see that for all these known chemotherapeutic drugs that are actually used in patients, there is a variation that goes down, a preferential ranking that you can come up with based on where they are on the curve and what number they actually are generating. Again, all of these drugs represent known chemotherapeutic agents, and they go from the topmost one, near 15, which are very nonspecific and will kill just about any cell, to ones that are more specific in terms of their biological function as they go down. And that specificity is in part reflected in what you see in the variance. Okay, on the next slide. Before continuing, is the source for this chart the MIP 3.0 data set or is this the DepMap? No, this is the DepMap data. Okay. All right. Thanks. So everything that we'll be talking about from here on out is done with the DepMap data. And to make a connection for you, Hassan, the DepMap data is a net new data set. So we don't have anything published on it. We are just getting to it right now. It's what we'll be starting to work with more and more. I misspoke. So there will be a couple of slides before we make that transition.
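Pulling together the flow sheet Paul walked through a moment ago (parameters in, EFF, S prime per compound and cell line, means per set, then delta S prime as reference minus test), a minimal sketch might look like the following. The column names, toy values, and the asinh form of S prime are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical tidy input: one row per compound / cell line fit.
fits = pd.DataFrame({
    "compound":  ["drugA", "drugA", "drugA", "drugB", "drugB", "drugB"],
    "cell_line": ["ref1",  "test1", "test2", "ref1",  "test1", "test2"],
    "set":       ["reference", "test", "test", "reference", "test", "test"],
    "A": [100, 100, 100, 100, 100, 100],   # asymptote at zero concentration
    "D": [5,   20,  30,  110, 120, 105],   # asymptote at infinite concentration
    "C": [0.2, 0.5, 0.4, 1.0, 2.0, 1.5],   # AC50, consistent units throughout
})

fits["EFF"] = fits["A"] - fits["D"]                    # effectiveness, may be negative
fits["s_prime"] = np.arcsinh(fits["EFF"] / fits["C"])  # assumed S prime definition

# Mean S prime per compound within each set, then delta S prime = reference minus test.
means = fits.groupby(["compound", "set"])["s_prime"].mean().unstack()
means["delta_s_prime"] = means["reference"] - means["test"]
print(means)  # negative delta S prime flags a preferential effect on the test (tumor) lines
```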
This one was actually made, the one with the blue line here was made with depth map data. The next slide is actually a regression back to the plexiform neurofibroma line. But it was made using the S prime transformation. And one of the reasons that I put this in here was to show that if you look at the MEK inhibitors, which are known to be effective in treating plexiform neurofibroma, you see that the cell lines that we were looking at were more sensitive as indicated by the yellow than essentially any other inhibitor that we looked at. Here I put in BRAF inhibitors because they're in the same pathway but it even so the MEK inhibitors by just gross ranking were much superior in terms of their overall ability to to inhibit cell growth on the the next, Matthew. One of the things that you can do with these is also to pool the various, maybe before I do this, go back to the previous slide. Thank you. Thank you. And while you can look individually at each one of these drugs in the MEK inhibitor column, for example, another approach is just to pool all the numbers under that. So if you use a pooled mean of the delta S from all the MEK inhibitors, a pooled mean of the delta S from all the MEK inhibitors, you will still get a negative value that is indicative of the entire class of compounds being more sensitive in terms of the plexiform neurofibromas. And you don't see see that for example in in the BRAF inhibitors or to a much lower extent this provides a very facile and quick way of being able to take a quick overview of what's going on without having to resort to looking at individual inhibitors you can look at the entire class, which was done on the next slide, Matthew. Thanks. Before we continue, Hassan, I want to take a pause and just make sure that you're tracking, because this is an important part. Is it clear how we can look at a category of drug type like MEK inhibitor and like make a judgment on it that it is or is not if you know effective at treating something based on looking at its pooled delta S prime value is that do you have any questions on that? Is that clear? I see you as unmuted, but I can't hear you. I see that apparently there's a cutoff for what's more sensitive and more resistant. So like if it's negative, like beyond negative one, then it's sensitive. Yeah, that's right. Yeah. And so that's for a single compound, right? So selumetinib is the one that is approved for the treatment of NF, and surprisingly, it's not that good. It's kind of in this, well, it's sort of, we're predicting it's sort of effective, but we're really saying it's unequivocal. It's not really a big difference between what it's doing to tumor versus non-tumor tissue. But some of these other ones look like really promising candidates, like all these ones up here. Now, that's great when we're talking on a drug-by-drug basis, but now we're going to kind of blur our eyes and say, well, we're just going to look at it from a pathways perspective. So we know that the MEK pathway is affected in NF1 because the RAS is not able to turn off and RAS activates MEK and then MEK does things downstream that are cell growth and cell proliferation. So drugs that are MEK inhibitors we would predict to be helpful because we're turning off something that is there's too much signaling for. 
So we would predict that mek inhibitors uh would be good at helping with this type of um signaling disorder and in fact that is in fact what we're finding um and so now we're kind of instead of talking about it from a compound by compound basis now we're kind of looking at it from a pathways perspective where we're looking at these categories of compounds and then there's this single value here which again yeah if it's negative it's it's more sensitive um that we're using for that so is is that clear do you have any questions on that yeah so what's the purpose of the pooled mean delta S prime well let's let's suppose that let's take a hypothetical you're an investigator that is working in this general area and you have a compound that is brand new. And you know that it affects MEK, but you don't have a lot of data on it. So if you have an MEK inhibitor and you look at this data and you look at the pooled mean, you can project that your compound, if it is an MEK inhibitor, is likely to have an effect on this pathway and be potentially effective in the treatment of plexiform neurofibroma. That's just one example that we can come up with very quickly. So what we're going to do is generalize this over different types of compounds? We'll generalize it from one class of compounds to another class of compounds so that we can look collectively at BRAF inhibitors or in this case MEK inhibitors. or in this case, MEK inhibitors. But we can do that if you go on to the next slide, Matthew, for a range of other classes of inhibitors. So those specifically targeting AKT1 or BRAF or epithelial growth factor receptor or anywhere down that list. So these are now pooled means. And you can look down the list and you can say, well, the one that is targeting the gene that is responsible for MEK, and that's MAP2K1. That is the gene that makes MEK. None of the rest have as promising a value. And all of these targets would be expected to be responsive to some extent in the overall pathway. So why is the value of negative 2.6 more promising than the others? Is it like, is there like a sweet spot? Because it's like negative one too big and 3.7. Like, how do you like, I don't understand the color scale. The color scale is just, is based on the previous slide. So again, we're looking at values that are range in activity for the MEK inhibitors from one extreme of minus 7.8 down to 0.4. Yeah, let me take a chance here on this one. So one of the intuitions to remember about the delta S indices, whether it's delta S or delta S prime, is that you're doing reference minus test. So you have non-tumor tissue signal minus tumor signal. And what you want is a drug that is preferential for the tumors and leaving the non-tumor tissue alone, essentially. So you would want things that are more negative. You would want a drug that's targeting the tumors but leaving the non-tumor tissue alone and if you can have a bigger negative then you're saying that there's more of an impact on the tumor than on the normal normal cell that's a good thing like you want to really preferentially target tumors so um when we're saying that we're we're color coding the negatives, it's because we're saying the magnitude matters and the direction matters. Go ahead, Hassan. So SREF stands for the non-tumor tissues, the ones we want to protect. Oh, so now the more negative, the better, basically? Yeah. The more negative, the better, basically? Yeah. The more negative, the better. Okay. And I'm just starting to connect the dots. 
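Here is a small sketch of the pooled-class view being discussed: average the per-compound delta S prime within a mechanism-of-action class instead of inspecting each drug. The drug names are real MEK and BRAF inhibitors, but the delta S prime values are placeholders; the sign convention (reference minus test, so more negative means more tumor-preferential) follows the explanation above.

```python
import pandas as pd

# Hypothetical per-compound results with a manually curated mechanism-of-action label.
per_compound = pd.DataFrame({
    "compound": ["selumetinib", "trametinib", "cobimetinib", "vemurafenib", "dabrafenib"],
    "moa":      ["MEK inhibitor"] * 3 + ["BRAF inhibitor"] * 2,
    "delta_s_prime": [-0.4, -7.8, -4.2, 0.4, -0.3],   # placeholder values
})

pooled = (per_compound.groupby("moa")["delta_s_prime"]
          .agg(pooled_mean="mean", n="count")
          .sort_values("pooled_mean"))
print(pooled)
# A clearly negative pooled mean (the MEK inhibitor class here) suggests the whole
# class is preferentially hitting the test (tumor) lines, without inspecting each drug.
```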
And like the log EFF over AC50, what is that again? This is just a mathematical transformation of this minus. These minuses have exactly the same meaning as this. So you can actually just ignore the bottom one at this point. It's not super important, but it means mathematically the exact same thing as log ref minus log test. It's just some log math tricks. And remember that the EFF and the AC50 are terms that we derive from the dose response curve. Oh yeah, yeah. So EFF is then equal to the difference between the two asymptotes, the high one and the low one, so A minus D, and C in the middle here is the AC50. Okay. All right. Thank you. And then going back to this color coding now, to make that connection. So we do want the highest magnitude negative, right? There is meaning there. There is also meaning in the highest magnitude positive, but it's like the opposite of what we want. There's something opposite going on. So scientifically, it's intriguing. Why is there more happening in the normal tissue than in the tumors? So it's good for investigation, but it's not necessarily going to be the compound you want to prescribe. It's doing the opposite of what you want. But you can study it to find out more information. But as far as druggability, as far as what might actually solve the problem, it's this bottom one. Okay. Any other questions, Hasan? No, I'm good. Okay. Paul? Okay, on the next slide. So you can take this same approach and sort of expand it, and that's what was done here to look at a wider range of genes. I don't want to really spend a lot of time here other than to say that you can expand it. Instead of looking at just a handful of genes, you can look at a much larger number of genes. The value that I'd like to bring your attention to is the one value in green. That's the same one that we were looking at before. And you can see just by scanning the overall grid that there were really only two values that were higher than that, or maybe three. But for the most part, the overall sensitivity of this target is fairly distinct. This gives us a way, if you're interested in a very quick overview of what's going on by individual gene, to do that and still retain the underlying drug specificity, which you can go into if you have an interest. This is just another layer that we can put on for the convenience of the individual investigator that may want to be looking at the data or inputting data for analysis. On the next slide, Matthew. So we have been thinking that we want to be able to take the S indices and make them generally applicable, not only to just the plexiform neurofibroma lines but to as many other cases as might be of interest. But we have a foundation in looking at NF1 genes and the difference between reference lines that have no mutation in the NF1 genes and ones that do, which is the basic difference between the reference lines and the test lines in the last set of studies with the neurofibroma lines. We hypothesized that for other types of tumor cells that contain NF1 mutated genes, we might be able to detect changes in drug sensitivity in a similar way, but we would be using a slightly different way of analyzing the data.
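For reference, the "log math trick" from the start of this exchange can be written out explicitly, using the talk's own terms (EFF = A - D, C = AC50) and assuming the same log base throughout:

```latex
\Delta S \;=\; S_{\text{ref}} - S_{\text{test}}
\;=\; \log\frac{\mathrm{EFF}_{\text{ref}}}{\mathrm{AC50}_{\text{ref}}}
  \;-\; \log\frac{\mathrm{EFF}_{\text{test}}}{\mathrm{AC50}_{\text{test}}}
\;=\; \log\!\left(\frac{\mathrm{EFF}_{\text{ref}}/\mathrm{AC50}_{\text{ref}}}
                       {\mathrm{EFF}_{\text{test}}/\mathrm{AC50}_{\text{test}}}\right)
```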
Because we have no normal tissue to look at, we would have to look at something slightly different. So the data that we're going to be discussing from here on is based on drug response data that we found in DepMap, which is a data portal for cancer. In particular, we're looking at one data source, and that's indicated in the CSV file underneath that. And Matthew is just typing in a source that you can look at specifically to find that CSV file. So if we just continue with that, I'd like to just show you a little bit of that file. This actually comes from that DepMap CSV file. Of interest to us was the fact that there is an upper limit, a lower limit in yellow, and, in green, an EC50. These are the key terms that we use in determining S values. Now, what these investigators have done is normalize the data so that the upper limit is always normalized to one. So the upper limit, or the upper asymptote, is always fixed to one. The lower asymptote can then descend; under ideal conditions it would go to zero, but it can actually go below that. The AC50 then has the same equivalency as what we've been calling AC50. There are some interesting points here, and I'd like to just show you that if you look at the upper and lower limit, and sort of the slightly orange lines here, you can see that if you start at one, the numbers actually go up for the first two values. They're going from one to 2.7, 2.5. What's happening here is that the cell number is increasing; those curves are going up rather than down. In light blue, you see starting asymptotes of one and ending asymptotes essentially at one as well. So those are just flat lines; there was essentially no effect. So then if we look at the total number of positive and negative values in this entire data set, the split between the values that I'm calling greater than zero and those less than zero is striking: there's almost something that approaches a 50-50 split between the curves that are going up and the curves that are going down. And classically, one would not be able to analyze these data unless they have true inhibitory curves. And you can see, if you go to the IC50 column, you just see lots of numbers not available. There are only two from this group that actually have numbers that follow a classic inhibition curve, where you can figure out what's going on. We're leaving, obviously, a tremendous number of samples on the table that can't be analyzed, because they're either going up the wrong way for an inhibitory curve or they're flat. So not only were we having trouble, but lots of other investigators are having trouble with the same problem. Some curves go up, and they can't be handled effectively in terms of current data structures. Next, Matthew. Okay. This is probably overkill, so I'm not going to spend a lot of time on it, but what was done here was to apply this to a variety of different cancer cell lines with MEK inhibitors and rank them. So we looked at pancreatic cancers, lung cancers, cancers of the central nervous system, breast cancers, and colorectal cancers. And you can then break them down by their gene product, whether they are NF1 competent or NF1 compromised. That's the difference between the top and bottom. So that's an easy way for us to be able to look at it.
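A sketch of how the tallies just described might be computed from a DepMap-style export. The column names here (upper_limit, lower_limit, ec50, ic50) are stand-ins for whatever the actual CSV uses; the normalization (upper asymptote fixed at one) follows Paul's description.

```python
import pandas as pd

# Hypothetical rows mimicking the normalized DepMap export described above.
df = pd.DataFrame({
    "upper_limit": [1.0, 1.0, 1.0, 1.0, 1.0],
    "lower_limit": [2.7, 2.5, 1.0, 0.1, 0.4],    # >1 means the curve goes up (growth)
    "ec50":        [0.8, 1.2, None, 0.3, 0.6],
    "ic50":        [None, None, None, 0.3, 0.7], # mostly missing unless truly inhibitory
})

df["EFF"] = df["upper_limit"] - df["lower_limit"]   # negative when the cells proliferate

going_down = (df["EFF"] > 0).sum()   # classic inhibition
going_up   = (df["EFF"] < 0).sum()   # proliferation, unusable for log-based S
flat       = (df["EFF"] == 0).sum()  # essentially no effect
print(going_down, going_up, flat, df["ic50"].isna().sum())
```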
And again, I don't want to spend a lot of time on this because there's just a lot of information here. On the next slide. Before we continue, I just wanted to go back. One of the things that we're starting to do, and this is part of that generalization that we're talking about, is we're starting to look across different cancer types. You see here: pancreas, lung, central nervous system, breast, and colorectal. These are ones we suspect may be related to NF1 in some way, but it doesn't matter. The point is that we're generalizing to other cancer types. And we actually have that in the DepMap data. It's right here. If we see this CCLE underscore name column, the part before the underscore is the cell line, but what comes after it is the tissue it comes from. It's abbreviated in the image, but it says prostate. So these would be the prostate cancers that these data samples are for, and we have that for all kinds of tissues, lung and all these other ones. So we have the data there to be able to say this is lung cancer, this is colon cancer, this is prostate cancer. And we also know, because of the analysis that we're able to do on the target data, which is over here after mechanism of action (it's not shown), the target for each of these compounds. So we're able to say, okay, well, drug target, like MEK, and cancer type. So we were creating a matrix of data, and that is a huge matrix that we could potentially generalize in DepMap: a matrix of all possible genes and all possible cancer types, and then within those, doing our delta S algorithm. You don't have to understand that very deeply right now, but there's just this idea that we can take the work we've done and broaden it by looking at cancer types and additional types of compound classifications. And gene classifications. Yeah, gene classifications. Let's skip this one, Matthew. Okay. So one of the things that's very important for us to be able to establish, and one of the questions that will come up, is: is your evaluation any better or worse than what's currently available? There are two systems that are widely used, one of which is looking at the AC50, and the other is the area under the curve. And specifically, what you're looking at when you're talking about relative differences is the change between each. So it would be a change in the AC50 from one condition to another, or the change in area under the curve from one condition to another. So this was put in to show, similar to what we provided in the first study, that what the delta S, or in this particular case the delta S prime, is looking at is statistically different from the difference that you see in EC50 values, which is located second over from the right-hand side, or the delta AUC, which is right next to it. And this was established using a Pearson correlation coefficient, where you're looking at a correlation fit going from 1 to 0. A correlation of 1 would be a perfect fit. A correlation of zero is a totally random fit. And the point of this is that the delta S is looking at a distinctly different endpoint than either the EC50 or the area under the curve. And it is more closely aligned with the EFF, which is the difference between the two asymptotes, that is to say, its effectiveness.
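To make the mechanics of that comparison concrete, here is a minimal sketch of pairwise Pearson correlations between the candidate metrics. The values are made up; the point is only that a coefficient near 1 or -1 means two metrics track the same endpoint, while a value near 0 means they measure something distinct.

```python
import pandas as pd

# Hypothetical per-compound summary metrics.
metrics = pd.DataFrame({
    "delta_s_prime": [-3.2, -1.1, 0.4, 2.0, -0.6],
    "delta_ec50":    [0.5, -0.2, 1.1, 0.3, -0.8],
    "delta_auc":     [0.1, 0.0, 0.3, -0.2, 0.05],
    "eff":           [80, 35, -10, -60, 20],
})

# Pairwise Pearson correlation matrix across the metrics.
print(metrics.corr(method="pearson").round(2))
```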
And this is important and an area that we will be building out, because what this will provide is something that addresses the known deficiencies in AC50 and area under the curve evaluations. They're not very predictive right now. We believe that using the delta S values, you can get a much higher and much better predictive outcome. And we will be moving to address this, because it will provide not only us, but other investigators, a bit more confidence in what these values are actually doing and what the high throughput screening actually means in terms of outcome. Okay, next. Okay. We spent a lot of time looking at MEK inhibitors. This is just to show that you can take the same approach and not only look at MEK inhibitors, but expand it to a variety of others depending on individual cancer type. So the cancer types that we were looking at are pretty much the same, but what we've done here is just change the gene target. So we're looking not only at MEK inhibitors, but PIK3, which is another gene, AKT, yet another gene, and P38. All of these represent different pathway alternatives. And there is stratification here. So this builds into an overall view that we can go back to, to relook at and stratify what's being affected in the various pathways, which is going to be a future focus of some of this work. Thank you. One thing to make a connection here, Hasan, because A, this is a net new data set, and B, we're trying to generalize across it. It's a massive data set. I mean, we're just looking at four things here, MEK, PIK3, AKT, P38. We want our web application to be able to have those genes as dropdowns in a filter, right? So, like, instead of Paul having to go into Excel and manually craft these complicated analyses, that's what we want the web application to do. We want the web application to be able to have a set of cancers, maybe we could use these ones or we could have a bigger one, and be able to select which of these genes a researcher wants to focus on. And so this, what Paul is proposing, becomes the foundation for an interactive web tool meant for investigators. And so that's a lot of the work that we're going to want to work on. That's part of generalizing it. We'll go back over what Paul had proposed slower and work with you on the code part of it. But the goal is to have something interactive, so investigators can look at what they want to look at, but have it in a snapshot, a visual chart, even with the color coding, that can let them look at any combination of cancer type and compound category that they want to look at. So that'll be some of the superpowers that we're giving the researchers coming up. All right, Paul. One thing I'd like to bring your attention to, Hassan, is the color code down at the bottom, which is yellow, gray, and sort of an off red. So you can call sensitive things one, things that are equivocal zero, and things that are resistant minus one, for example. Let me turn that around: the sensitive as minus one, the equivocal as zero, and the resistant as one, so we keep the same numerical structure. So the negative numbers are something we want and beneficial.
So on the next slide, you can take and use that same numerical system and, using the same data just transformed in a slightly different way, you can now graph the data looking at the same inhibitors or inhibitor classes that we were looking at before and align them by cancer type. So the dark blue here, all those bars, reflect pancreatic cancers. All the red bars indicate breast cancers, for example. There are many different ways to capture the data and present it. And this is one of the dilemmas that we're having right now: how best to present the data when you're talking about extremely large sets of data, and how to try to capsulize it for the viewer to make sense of it. And that will be a continuing focus as well as we move forward. Okay. And I think we're second to the end here. So if we go on to the next slide, Matthew. Just before we proceed, in case it wasn't clear, the magnitude of these bars is the count of the occurrences of something being in one of these categories of sensitive, equivocal, or resistant. Is that right, Paul? Yes. Okay. So that's how, if we ignore the actual number values and just kind of categorize them, we're making broad categorical assessments, saying that, well, in breast cancer, MEKs have this strong negativity, but not in pancreatic cancer. So we can make a differential determination on what treatment should be based on whether it's pancreas or breast, and say, well, don't use MEK inhibitors in pancreatic cancer, they're not going to do what you think they should, but do use them in breast cancer. So it goes back to what the doctors are able to prescribe a patient, because most times when you have cancer these days, they're actually going to sample the cancer and do a genetic profile, so they'll know which mutations you have. And this is all just for the NF1 mutation scenario. So if you have the NF1 mutation and breast cancer, you're going to get something prescribed that you're not going to want if you have the NF1 mutation and pancreatic cancer. So as far as the impact we're making for people, this is an important slide, because it really puts together what a doctor might be interested in. If they know what a patient's genetic profile is for their tumors, it will affect the treatment options that they should be prescribing. Do you have any questions at this point, Hassan? No, it's pretty clear. OK, so just based on what Matthew was saying, you can then bring this back to just looking at the mean delta S values, knowing that, based on the previous slides, there are some cancers that are preferentially sensitive to MEK inhibitors as a class. So returning to the S prime values for those inhibitors, you look at their overall inhibition rate, and you can select a single chemical or a single drug that would be the most efficient in terms of overall kill rate. So that provides, again, additional information from a therapeutic perspective, as well as information from a drug development point of view, because you're balancing the projected outcome for an individual drug as well as for its target class. And that's very useful information for selecting drugs that you want to move forward toward clinical trials.
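The bar chart logic Matthew confirms above (bar height equals the count of compounds falling in each category) can be sketched like this, assuming a "category" column that already holds the sensitive, equivocal, or resistant call for each compound, cancer type, and inhibitor class.

```python
import pandas as pd

# Hypothetical per-compound calls after delta S prime categorization.
calls = pd.DataFrame({
    "cancer_type": ["breast", "breast", "breast", "pancreas", "pancreas", "lung"],
    "moa":         ["MEK inhibitor"] * 6,
    "category":    ["sensitive", "sensitive", "equivocal",
                    "resistant", "equivocal", "sensitive"],
})

counts = (calls.groupby(["cancer_type", "moa", "category"])
          .size()
          .unstack("category", fill_value=0))
print(counts)   # e.g. MEK inhibitors: mostly sensitive in breast, not in pancreas
```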
And I think there's just one more slide, Matthew. We do have some long range goals that we're not working on robustly at the moment, but those involve trying to do combinations of drug evaluations. In many cases where people actually have cancers now, they frequently are getting not one drug, but two or, in some cases, more drugs at a given time or in sequence. And this is to help address different pathways and kill mechanisms, with the long range goal of trying to totally eliminate the cancers themselves. But this combination drug evaluation is a daunting problem, and has been for many years, because there are many interactive pathways. And one of the reasons that we've been spending a lot of time talking about pathways is because when we get to combination drug evaluation, those are going to play a dominant role. And ultimately, we think that with a lot of work, where we want to get to is evaluations via neural network modeling that can save us all a lot of time in terms of trying to make projections based on just the preclinical data that goes into the dose response curves. Matthew? Matthew Kuehnert Thank you. So at this time, I'm going to turn off recording. Thank you so much for helping us get this together, Hasan. It's very helpful. So let me go ahead and get that off. Hasan Qureshi Thank you, Paul and Matthew.
delta S' and Future Work - Computational Biology - Part 3
3,328
MoCo Makers Group
20240606
2024-07-29T10:12:57.595666
https://www.youtube.com/watch?v=DoCUE89MpYk
Recording is on. Hopefully this will be very useful for the whole group. We're looking to sharing this video and these slides out. And this will be a good foundation for everybody to kind of get caught up on the biology and also some of the direction that we have for how the algorithm is evolving. So historically, again, we've looked at high throughput screening data. This is the literal data from our original data set. So this is the MIP 3.0 data set that we currently have on our web application. You'll see that it goes from 0 to 10 data points, and those are activity levels. And that for each of those activity levels, we also have a concentration. That's the C values here. And there's one drug per entry, and a drug can occur multiple times based on if it's a different cell line that's being tested. So, again, we looked at multiple cell lines. Mostly, most of them were the test lines, which were the benign tumors. And those all looked like this, 5.5, 6.2, 9.5.11b, 9.5.6. We had one that was non-tumor and also from an NF1 patient, which was this plus minus, and it was this IPN NF9511C. And so hopefully with more exposure and familiarity, you'll start to be able to eyeball and identify this one as the reference. So generally, there's one reference and four test lines in what we ended up publishing with. There was more in the raw data. We just didn't use a lot of the raw data, either because they were largely because the reference lines weren't as didn't have as much information in them when if we were to use them as the reference or for the test lines there's actually one that had like a duplicate that just wasn't very helpful for us so we just discarded it. So we talked about this being a dose-response curve, and then the technical term here being four-parameter regression model. And the parameters of the regression model are, of course, A, D, C. And then technically there's a B in there for actually creating the sigmoidal shape. And then technically there's a B in there for actually creating the sigmoidal shape. And we're using, we're actually, we didn't actually end up using this literal formula directly. Behind the scenes, we had a program that Paul has that was doing this for us when we were creating these charts. creating these these charts um but we we our our algorithm of delta s or delta s prime really is only using a c and d um all right so we talked about the there are different naming conventions ec50 ac50 effective concentration active concentration. IC is really only relevant for inhibitory things, so inhibitory concentration. So only when the curve goes down would we use that. Largely, these are interchangeable for our purposes. So if you see EC50, AC50, for us, largely it's interchangeable. All right, so we now start to define something new. This is based on our algorithm and our approach, which is the term effectiveness. We are defining effectiveness as the height difference between these two. So the A is the asymptote extrapolated to zero drug concentration, D is the asymptote at the infinite drug concentration, and we are finding the size of this and calling that effectiveness. You'll note that this will generate a negative number if the curve goes up. Now it really does depend on the response that you're getting. 
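For anyone following along on the code side, here is a minimal sketch of a four-parameter logistic model with the parameter names used in the talk (A and D for the two asymptotes, C for the AC50/EC50, B for the slope), plus the effectiveness term. This is one common 4PL parameterization, not necessarily the exact form the group's curve-fitting program uses.

```python
import numpy as np

def four_parameter_logistic(conc, A, D, C, B):
    """One common 4PL form: response at concentration `conc`.
    A = asymptote extrapolated to zero drug, D = asymptote at infinite drug,
    C = AC50/EC50 (midpoint concentration), B = Hill slope."""
    conc = np.asarray(conc, dtype=float)
    return D + (A - D) / (1.0 + (conc / C) ** B)

def effectiveness(A, D):
    """EFF (a.k.a. Amax here): height difference between the asymptotes.
    Negative when the curve goes up, i.e. the cells proliferate."""
    return A - D

conc = np.array([0.0078, 0.2, 1.0, 5.0, 15.0, 46.0])   # illustrative dose series
print(four_parameter_logistic(conc, A=100, D=10, C=1.0, B=1.2))
print(effectiveness(A=100, D=10))
```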
So remember that in the high throughput screen we're giving a chemotherapy, what we expect to have happen is cell death that's that's that would be less luminance less activity you know you you give more drug more of it dies that would be what you would think um in some situations for some of the compounds the there was actually cell proliferation right and so in the curve actually went up we didn't we didn't use those drugs in our original study because they didn't they didn't work for treatment and you know we we kind of filtered them out but it was it was it was a it was very interesting and notable that there are things that that don't behave as you would expect. Now that's still interesting science, right? Like you want to know why is it proliferating? What's going on here? But from a treatment perspective, it wasn't relevant for treatments exactly, not directly at least. And well, I guess what we're calling out here is that there's another name for this effectiveness you'll sometimes see, which is amax is that right paul yes okay so you'll see that in some literature some papers i'm not super relevant some of our some of our data uh uh files will have that as well as we move forward okay that's good yeah one of the things that we're trying to do is generalize to other data sets and adapting to their terminology is is actively part of the challenge especially for us coders who have to deal with the raw data all right so relative activity um historically has been based on the difference between ac15 area under the curve. Actually, Paul, I'm going to let you take this slide if you could, please. Sure. Drug screens have been used for a fairly long time going back to about the 50s. And the mainstay in terms of trying to compare one drug activity against another has been either a comparison of the AC 50 which reflects potency or the area under the curve the area under the curve is a more total effect of the drug. And it is indirectly a measure of effectiveness. But both of these methods have been known for many years to have issues in terms of their predictability. They are incomplete in terms of their predictability. They are incomplete in terms of their ability to predict what's going to happen in a, especially as it applies to high throughput screening. So there is known to be a need for something that improves the predictability especially when you think of you're talking about thousands of drugs and if you have large errors in your predictability the utility of those screens goes way down so we're hoping to address that in with some of the work that we're doing. On the next slide. Okay. All of what we're doing is based on dose response curves. But if you just take a look at the various dose response curves, you can see that just trying to get an understanding of what's going on by the various curves is really difficult trying to assess where the the ec50s are even just visually looking at them it's it's hard to appreciate exactly what's going on part of the problem is that the curves are not always classically sigmoidal they have kind of funny shapes uh the effectiveness that is the height of the the difference between the the upper asymptote and the lower asymptote is frequently not full scale that becomes a difficulty in terms of interpretation, especially as it applies to the EC50 values. 
What happens there is you get a skewing upwards of the EC50 values, and that introduces a lot of analytical ambiguity, because instead of looking at changes in one dimension, you're looking at changes in two dimensions. one dimension you're looking at changes in two dimensions. So just based on this I hope you can appreciate that you know there really is a lot of funkiness that's going on and there is a need for wanting to change to something that's a little bit more effective on the next slide yeah right before we move on I just wanted because this is a really important for the rationale for why Delta s prime in particular but Delta s as well you see how the the curve with the white dots is not a full sigmoid is is that clear to both hassan uh is that clear to you yeah it's like you can't describe it with a polynomial is that what you're saying well it it's it's just not the full model right our model is for the full sigmoid and this is like less than the full sigmoid it's maybe like half yeah it's truncated on both ends yeah like the wired dots the white dots one yeah and and so what what that means is that we don't really know where the ac50 is right and and that you know if we we can make a guess the algorithm can do do an interpretation but it's it's challenging right like we know that it's probably off and it would be better in the ideal world, they would have had more concentration samples and you would have just seen more of it. But that's just not always the case with practical data. Now, the other one that's a weird shape is this all black dots one where it just, it's hard to call that a sigmoid. I mean, it just slams down. And so it just, it has a different type of response than what you would typically expect a very sensitive and narrow response. And, you know, there just wasn't good sampling, right when the key activity change was happening. So that's also a morphing. That's a challenge, right? It means that it's not following the sigmoid model that we're hoping and there's a deviation so it just means in both both of those cases that's going to be harder to interpret that data so that's really important because what we what we're proposing coming up is that we have our approach is a little bit better at working with those styles of data than the area under the curve and the traditional AC50 style. So this is basically the main point of the research. It's one of the main reasons why we think we're better. Yeah, just to put it like bluntly, like it's one of the reasons because we can deal with more practical, realistic data. It's more than that, though. We can also do cross-cell line analysis. But just at the most simplest level, yes, ours is better than what they're doing in industry right now. Because in industry right now, they're not compensating for the fact that sometimes the data is not complete or it's just not showing a good dose response curve they're kind of assuming an ideal the the biggest problem right now is not taking into account that the the drugs are that the drugs are not having a full scale effect. So instead of going from 100 down to zero, many drugs will have something much less than that. So they plateau at the lower end at 50, or in this case, and the one Matthew was showing that has a very sharp drop off it's 75 percent more or less so you're only getting a a change a net change of 25 percent in terms of the kill rate and if you don't take that into consideration uh all of your estimates about what's good or effective are can become less than accurate. 
That's very critical. All right. Thanks, Paul. Did you want to keep going or did you want me to continue? I think normally you take over here so you can just call me if you need something. Okay. All right. So now we're going to start going down the train of thought on what was our algorithm? What did we do different in our original paper, the delta S? First of all, we are introducing a combined value, which is a single number. For me, S stands for score because I was always thinking of turning this into a ranking of which compounds might be more relevant. But it's a single value, so however you want to think about it. And it combines the effectiveness, which we defined before, which is the height of the asymptote, and what we're calling potency, which is that AC50 value. And it's combining it here as a fraction and creating a single value S. This approach comes from receptor ligand pharmacology which just means the part of pharmacology that relates to how things are binding and it can be used to rank the inhibitory effect of compounds I know Paul and I go back and forth on the role of S. I'm much more interested in delta S, but S by itself does allow you to look at a compound and say, for this compound, what is its overall single value effectiveness and potency? If you just want to look at it really quickly and judge it, it's very effective. So this is that approach of a single value S ranking. And what we have here is all of every single compound from our data set was plotted with its S value. And so this is just all 1,900 compounds just plotted. And, you know, by coincidence, it happens to have a decent curve. Paul, I'm going to let you kind of go into the explanation a little bit. Okay. So when we do this, part of the reason that we're looking at transforming everything into a log is because the numbers actually cover a wide range of values, many magnitudes of difference. is due to the the change in the AC 50 which can vary quite quite a bit yeah one thing to make clear actually going back to concentrations when when when scientists use concentrations they tend to use uh exponentially more concentration right so uh this this is actually a log concentration down here um so you know micromolar millimolar molar you know like they're dramatically changing the amount of concentration so So they never really use a linear scale. And we can actually see that if we go back for a second to our concentration data, we're going from this 0, 0, 0, 7, 8, 9 to 0.2 here. And then we're all the way up to like 1, 5, and 15, and 46, right? So the concentrations are, you know know logarithmic mean more and and that's because for you to see a larger impact a lot of times you need exponentially more compound to see that effect so that's why when we say that the AC 50 varies dramatically, what we're saying is the concentration range varies dramatically. And that's why looking at it in log space, which will kind of flatten it a little bit, is going to help us. And it makes everything a little bit, certainly much easier to graph. And from just a mathematical perspective, it makes it a little bit easier to understand because you're talking about values that are relatively restricted in numerical space. So again, the S value we're defining as being the log of the effectiveness over the AC50. And this by itself monitors the inhibitory effective activity of a given compound in a single cell line. And nearly all of these values are going to be positive. 
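A minimal sketch of the single-cell-line S score and ranking just described. Base-10 log and the guard against non-positive EFF or AC50 are assumptions, chosen to match the limitations discussed next.

```python
import numpy as np
import pandas as pd

def s_score(eff, ac50):
    """S = log10(EFF / AC50); returned as NaN when the ratio is not positive,
    which is exactly the limitation for proliferating or flat curves."""
    eff = np.asarray(eff, dtype=float)
    ac50 = np.asarray(ac50, dtype=float)
    ratio = eff / ac50
    out = np.full_like(ratio, np.nan)
    ok = ratio > 0
    out[ok] = np.log10(ratio[ok])
    return out

screen = pd.DataFrame({
    "compound": ["drugA", "drugB", "drugC", "drugD"],
    "EFF":      [90.0, 25.0, -15.0, 60.0],   # percent; drugC proliferates
    "AC50":     [0.05, 2.0, 1.0, 0.5],       # keep units consistent across rows
})
screen["S"] = s_score(screen["EFF"].to_numpy(), screen["AC50"].to_numpy())
print(screen.sort_values("S", ascending=False))   # higher S: more effective and potent
```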
And what that indicates is that the compounds are having an inhibitory effect. I want to point out here that all of our numbers are based on logs, and log values have certain limitations. One of the key limitations is that you can't take the log of a negative number. And that will become more important as we start to make a transition over to S prime. But from a practical point of view, it means that if the curve is going up instead of down in terms of its concentration and response, then we can't analyze it using just S alone. But you can use it for all compounds that are inhibitory, and you can use it to rank cells and compound responses. So just for selecting something that is going to be beneficial in terms of at least one cell line, you can make a nice ranking and apply statistics to that to make sure that, for example, the 95% confidence intervals are separated from each other. And that gives confidence in terms of the practicality of separation. On the next, Matthew. OK. So that leads us to now looking at the differences between cell lines. As I mentioned before, S is good at looking at relative compound efficiencies in a single cell line, but very frequently what is even more important than that is looking at the differences between a non-tumor situation and a tumor line. And that is particularly important when we're talking about the NF1 syndrome and the cell lines that we get from them. The NF1 syndrome is non-tumor. Any patient that comes in with the NF1 syndrome, most of their body is non-tumor. And you want to treat them, and you're targeting just the tumor cell lines, the tumors that are in that patient. So you want to be able to get a differential effect, where you're treating the tumor cell lines and killing those off selectively and sparing the normal tissue. The particular caveat here is that this treatment is not short term. These patients have a genetic disease, so they're going to have this situation where you have tumor versus non-tumor for the balance of their life. So the treatments have to be designed to maximize the tumor to non-tumor ratio, because they're on lifetime therapies. That's very different from, for example, treating a malignant cancer, where the treatment is brutal, but it's relatively short-term. And it's designed to kill the tumors and not kill too many of the normal cells. So the patient survives; even though maybe his hair falls out and he has stomach issues, there is survival. This is a little bit different. What we're monitoring here is the change, which we call delta, between a reference set of lines, whatever we define as a reference, and whatever we define as our test lines, which in this particular case is tumor lines. So delta S then is the change in S going from one set of conditions to another, tumor versus non-tumor. This allows us to do comparisons across cell lines and different compounds relative to whatever we end up calling the reference lines. So we can use this type of delta S anytime we want to look at a difference between cells that are of connective tissue origin, for example, and those that are of epithelial origin.
that have one set of gene features nf1 positive for example versus ones that are nf1 negative again as an example and in the first set of papers that we did this is the approach that we use that we use s uh and s prime as our two indices to look at the effectiveness of drugs across uh both non-tumor and tumor lines in our first paper and that's delta s yeah. Thank you. So just wanted to go back for a second for a side discussion, which is also on the impact of units. It is it's been a criticism internally, that, you know, we want to be very cautious with these indices. One thing to keep in mind is if you look at the units of EFF over AC50, AC50 is a concentration. EFF is a percentage, generally speaking, so it's kind of unitless. But we do have at least the AC50 units. What that means is that because this is a log scale, units have an outsized of impact right so one thing to be very aware of is when you're doing this kind of ranking it only really works when the units are consistent across all the samples you can't mix micro molar and molar concentrations you have to have consistent units because the they will skew the log value very dramatically. Small changes can have really big impacts in log space. So that's one thing to keep in mind. One of the reasons I'm also particularly a fan of the delta S is because it helps us deal with a lot of the units issues. You still have to be consistent, but at least theoretically, you can see how because of the way log math works, this subtraction of the reference minus test log spaces is equivalent to this kind of ratio. So when you start looking at the code, we use the term ratios a lot. And the reason we use the term ratios is because ratios also represent this basic delta S subtraction. So it's important to know that A, there is mathematical equivalency between this kind of subtraction and the ratios. But also, if you're worried about the units like I am, it becomes less mathematically relevant once you have this kind of ratios here, where obviously... Right, because the ratio, because the units cancel out. Right. So that's one of the reasons delta S is also, at least mathematically, very reassuring, at least by the time you get to that point. Just to make one thing clear, in case it wasn't, the reference line is the non-tumor tissue, and the test line generally is the tumor tissue. Before we proceed, are there any questions, Hassan, so far? No, I'm good. You can go on. Okay. All right. Now, when we did our delta S, obviously, you need two cell lines. You need a reference and a test line. And if you recall, our original data set had one reference and four test lines. One of the things that we were able to do to improve the predictability of our algorithm is we created a delta S for every single test line. So we kept the reference line constant. We use the same one over and over again, but we changed the test lines to other tumors, right? Other plexiform neurofibroma tumors. And then we took a mean of that. So we ended up actually moving from just a single delta S, which is one reference, one test, to a mean delta S, which was a single reference in four test lines. And that's what we ended up introducing here is this delta S log mean, which covers at least four of the entries, assuming that they all had valid data. 
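To ground the units discussion, a tiny numeric check, assuming base-10 logs: subtracting the reference and test S values gives the same number as taking the log of the ratio, which is why the AC50 units cancel as long as they are consistent within a comparison.

```python
import math

# Hypothetical reference (non-tumor) and test (tumor) fits, same units throughout.
eff_ref, ac50_ref = 60.0, 1.0     # e.g. percent and micromolar
eff_test, ac50_test = 90.0, 0.1

s_ref = math.log10(eff_ref / ac50_ref)
s_test = math.log10(eff_test / ac50_test)

delta_s_subtraction = s_ref - s_test
delta_s_ratio = math.log10((eff_ref / ac50_ref) / (eff_test / ac50_test))

print(round(delta_s_subtraction, 6), round(delta_s_ratio, 6))  # identical values
# Negative here: the test (tumor) line responds more, which is the desired direction.
```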
If one of the sigmoid curves was really broken like it was not sigmoidal we would generally toss out that cell line and we had a criteria in our paper that we needed at least three out of four to be in good shape for us to to analyze it so we could still create a mean with three out of four or four 4 out of 4, ideally. If it was less than that, it wasn't something we could analyze, and we would just discard that compound. So what this shows, again, is a list of all of our compounds, all 1,900 compounds. And now we're looking at the the Delta s log mean and what you see is that there is there's some things that are above zero some things that are below zero and there's a bunch of years at zero so if you think about this base of subtraction if you have a zero response, it means that there was no difference between the tumor and non-tumor tissue. Whatever they did, they did them exactly the same together. There was no relative difference. And what I was particularly more interested in was the drug sensitivity stuff, because what that is saying is that there was more of a difference in the tumor line than in the non-tumor line, the test line versus the reference. And in those cases, you're getting a negative delta S or negative delta S mean. So looking at this curve in particular, we were able to kind of make a claim or make a justification that at 0.5 and up, we're calling all of those compounds drug resistant. the effect in the test line in the tumor lines was less than the reference was. And then at this negative 0.5, we're saying it's drug sensitive, like we're categorizing these as drug-sensitive compounds, meaning that the test line had more of an effect than the tumor line. We're seeing a difference. And then for the zeros, I don't think we gave it a label, but it's basically undifferentiated, I guess is what we're calling it. There was no big difference. Is there anything else, Paul, you wanted to call out here? No, other than to say that these ultimately were changed into what you see on some of the yeah so we then now having calculated delta s uh we then looked at this from a kind of a pathways perspective or maybe a mechanism perspective and what we had done in our original paper is paul was able to manually curate the targets for those compounds into categories, like which pathway does this affect. So this is, again, the PIK3 mTOR pathway, which is one of the ones that we called out before. There are certain ones involved in the cell membrane, certain ones in the DNA cell cycle, and there were some wild cards down here we were very interested in as well. But we were able to bucket these and kind of look at them as an aggregate and look and say, what does this class of compounds do? So for everything affecting that PI3K mTOR pathway, what does it do and also within a pathway which ones were the strongest signal so the size of the bars are showing how much of a signal how much of a Delta s there was and the Reds are all the drug sensitive compounds and you can see those values here in the delta S log mean column. So we're ranking them by this delta S log mean and how many of the cell lines had good values. Many had four out of four, like we see with this chlamypramine up here. And then you see that there are cell lines that are more responsive and less responsive. And this is going to be true with real world patients as well. So this cell line here was very responsive to treatment. This one, 6-2, was a little bit of a problem child. It responded often, but not always. 
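A sketch of the delta S log mean and the cutoff calls described above: a delta S per test line against the single reference, a mean reported only when enough test lines had usable fits, and the plus or minus 0.5 thresholds for resistant, sensitive, or undifferentiated. The data and the adjustable min_valid argument are illustrative; the published analysis ultimately required four of four.

```python
import numpy as np

def mean_delta_s(s_ref, s_tests, min_valid=3):
    """Reference S minus each test-line S, averaged over valid (non-NaN) test lines.
    Returns NaN when fewer than `min_valid` test lines survived quality filtering."""
    deltas = s_ref - np.asarray(s_tests, dtype=float)
    valid = deltas[~np.isnan(deltas)]
    return float(np.mean(valid)) if valid.size >= min_valid else float("nan")

def call_category(mean_delta, threshold=0.5):
    if np.isnan(mean_delta):
        return "unqualified"
    if mean_delta <= -threshold:
        return "drug sensitive"     # tumor (test) lines respond more than the reference
    if mean_delta >= threshold:
        return "drug resistant"     # tumor lines respond less than the reference
    return "undifferentiated"

m = mean_delta_s(s_ref=1.2, s_tests=[2.0, 1.9, np.nan, 2.3])
print(m, call_category(m))
```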
But then this 11B and 95.6, it was decently responsive. So this is some of the advantage that we had, especially in the structure of the original high-throughput drug screen, where there was a plurality of cell lines tested. And that was really advantageous to us because we could make a claim that, look, this is what you would see across a set of people right you have different mutations different profiles for the genetics but on average we're still seeing something nice and ideally you know from a from a research perspective the more cell lines that can be screened, the better, right? Like we wish our log mean was, you know, had hundreds or thousands of cell lines. That would be great. It's very expensive to run high throughput screening, but there's this idea that as time goes on, you can add more and more of these cell lines and then the log mean value becomes more and more accurate. And we're very excited about that. We hope to have more certainty the more data sets we encounter and the more studies are run. But there's this idea that we are creating these pools or functional domains and we're able to kind of assess things as a pool. Now, this is drug sensitive. It is fair to say that drug resistant is interesting scientifically. It's not what you'd necessarily prescribe directly because obviously it's not preferentially killing the tumor. But if you want to understand the pathway involved, if you're trying to discover the pathway and what are all the implications, you know, there is useful information here. Like, why? Why are these ones that affect the tub and the TOP2A? Why are they the way they are? You know, it's very useful. And then Paul had mentioned that there is a nuanced use case where sometimes you're giving somebody uh an agonist which you know is promoting something like maybe cell death but you're also going to give them the opposite inhibitor in a small concentration to kind of dial in an effect paul how would you describe that well no you got it right on point. Okay. So, yeah, you would, you, even though this is not the primary thing you want for killing your tumors, it still might be part of a combination therapy, or at least you'd want to study it scientifically to know why these proteins and genes are involved at all. It was also very helpful in terms of us being able to validate that the drugs were doing what we expected them to do. Validation is a very important component of making an algorithm like this. I mean, algorithms are very cheap, but getting ones that are predictive and having numbers that make sense according to established biology, that's a little bit harder and so uh part of this exercise was to make sure that we were making uh we're having outcomes that were entirely or at least largely consistent with known biology both from a drug sensitive perspective as well as a drug-resistant perspective. And then that in turn speaks to the power of the algorithm. Matthew? Matthew Cooke Thank you. All right, let's continue. So this side is focusing and narrowing in. For each of those categories, what were the top two most drug sensitive compounds? And so if you want to think of this like a doctor, right, like I need to prescribe something, you probably want to target some pathway, right? Like I want to, I, you know, I've already tried an mTOR inhibitor and it didn't do anything. I need an alternative, right? Well, this set of buckets are providing independent alternatives. They're doing different things. They're different parts of the cell cycle. 
So this is a way of saying these are alternatives, and this hopefully would be very useful for at least the researchers, if not the prescribers at some point, on saying, well, let me see what the top candidates are in these different parts of the pathway. pathway. Again, because of the way genetics works, you know, there's a lot of variability in it. You know, people have mutations in different places. They also have mutations in other parts of their genome that interact with parts of these pathways. So you never know exactly how it's going to play out with a single patient, but you can make some predictions if you have their genetic sequencing, for example, or you just know what you've tried with them before, where you've given them prescribed drugs and saw that there was little to no effect, and then you have to move on to something else. But this is a good way, again, this is a good way of... This can also be... Yeah, go ahead. This can also be very helpful from a drug development perspective because instead of starting now with 1,912 drugs, you can narrow it down to just a handful, maybe 10, that have slightly different mechanisms of action and are candidates for advanced preclinical studies, which are very, very expensive. So it's a way to weed out relatively quickly and with hopefully a degree of accuracy drugs or compounds that can be moved on into advanced preclinical studies. That's really important especially for rare diseases that don't have a lot of attention paid to them. Matthew? Matthew Kuehnert Now, it's fair to say that the science that we're providing should add value to the community. If you look at some of these, some of these are approved drugs for other things, like this one is a statin right here and vertiporphine is known as a photosensitized photosensitizer it these are some of these have been studied so again our original data set was a mix of approved drugs and experimental drugs some of these are approved drugs and they're not ones you would expect like that. We wouldn't really have predicted a statin would have an effect here, but that is something that we found. So not only are we ranking, but we're also bringing to light compounds that researchers just may not be aware of or they wouldn't have thought to look at. And we're giving justification for why they should probably slow down and look at this a little bit closer and so that's part of the value that we're actually providing the research process here is not only we're giving you a ranking but we're finding things that are unexpected and interesting and we can we can ask at least in follow-up studies why or what can we do to validate this oddball effect and then go deeper on it all right so let's summarize a few things this is this is a conclusion of the work we've done on the Delta S uh so the the in the final version of the paper we we actually required four out of four cell lines have good data we decided that that was the most conservative thing we could do because if we had three out of four we would have to introduce some weird statistics uh it just wasn't worth it for us we wanted to be very conservative with any of the claims we were making so we we did end up publishing with just four out of four being required to justify a strong log mean delta S. But internal discussions, we were comfortable with three out of four of the tumor lines having a positive delta S sign. S sign. 
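The "top two most drug sensitive per category" view described a little earlier can be produced with a simple sort and group, sketched here with placeholder pathway labels and values; more negative mean delta S means more tumor-preferential.

```python
import pandas as pd

ranked = pd.DataFrame({
    "compound": ["drug1", "drug2", "drug3", "drug4", "drug5", "drug6"],
    "pathway":  ["PI3K/mTOR", "PI3K/mTOR", "PI3K/mTOR",
                 "DNA/cell cycle", "DNA/cell cycle", "cell membrane"],
    "mean_delta_s": [-1.8, -0.9, -2.4, -1.1, -0.6, -0.7],
})

top2 = (ranked.sort_values("mean_delta_s")   # most negative (most sensitive) first
        .groupby("pathway", sort=False)
        .head(2))
print(top2)   # two leading drug-sensitive candidates per pathway bucket
```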
And one of the details here that we skipped from the statistics, but we're going to bring up right now, is that when we do the delta S, we are given a score for those dose-response concentration data points of how much like a sigmoid it is, how much like our model it is. And this is the R-squared value. It goes from 0 to 1. 1 is a perfect fit, data points that perfectly fit the model; 0 is completely unrelated to the model, a completely random scattering. And we set a threshold for our tumor and non-tumor lines of 0.8 or better for how sigmoidal the data is. This is intentional. One of the things that we factored into making a threshold like this: when you have a drug that you're trying to prescribe, if the curve has one of these really sharp angles, it's very hard to dose that, right? You can't really prescribe somebody a dosage and expect the effect to be, say, 20% or 30% down; it's just going to make a sudden, uncontrollable jump. So generally, that's not going to be a good candidate for treatment, because you can't give somebody a pill and know what effect they're going to have. It's just not dosable. And other things, like this other one with the white dots where the data is incomplete, well, that's also going to have a bad R-squared value, and depending on how bad, we may not be able to use it. Maybe the points that were observed just weren't enough to extrapolate what the AC50 was, or for us to be comfortable calling it a sigmoidal curve. So one of the good things with a sigmoidal curve is that there's this region that is dosable, that has a gradation in what the effect is for a given concentration. Not all biology does that; some of it is this snapped-shut thing, a very dramatic change. But nonetheless, we were preferring ones that were more traditionally pharmacologically relevant, which would have more of a sigmoid curve than not. So using the R-squared to filter out data did a lot of things for us. Maybe there was sampling error in the original data; maybe that's why the R-squared was bad. Maybe there was incomplete data, poor data coverage, or maybe the shape was just not clinically relevant, right? It just didn't have good dosability. So there are a number of reasons that justify wanting and preferring a better R-squared, and 0.8 was our threshold for that. Now, what that meant for our analysis depended on which line had the problem. If you had a bad R-squared on your reference line, we just couldn't assess anything about that compound, because we only had one reference line; if it was bad, there was nothing we could do. But if it was good and one of the test lines was bad, that's not a problem, right? If three out of the four test lines had good R-squared values, we could still analyze it. We would just discard the fourth one, but three out of four was good enough for us. So that is the R-squared value. It is very important for the delta S approach. We will call out and give you a preview that it has much less of an impact in our S prime algorithm coming up, but it plays a strong role in our delta S study. It was a really critical quality control for the whole study. And so when things didn't work, when we did have misses, what were the common reasons? One was a low R-squared.
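To make the curve fitting and the R-squared filter concrete, here is a minimal sketch, assuming a standard four-parameter logistic (sigmoid) model and using scipy. The parameterization, starting guesses, and example numbers are our assumptions; the talk only specifies that a sigmoid-like model is fit and that curves with R-squared below 0.8 are excluded.

```python
# Minimal sketch: fit a four-parameter logistic (Hill) curve to one
# compound/cell-line dose-response series and compute R-squared.
# The exact model the group used is not specified; this is illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ac50, hill):
    """Sigmoidal response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ac50) ** hill)

def fit_with_r_squared(conc, response):
    """Fit the sigmoid and return (fitted parameters, R-squared)."""
    p0 = [response.min(), response.max(), np.median(conc), 1.0]  # rough starting guess
    params, _ = curve_fit(four_param_logistic, conc, response, p0=p0, maxfev=10000)
    predicted = four_param_logistic(conc, *params)
    ss_res = np.sum((response - predicted) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return params, 1.0 - ss_res / ss_tot

# Keep only curves that look sigmoidal enough to be dosable/interpretable.
R2_THRESHOLD = 0.8
conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])      # example concentrations
response = np.array([98.0, 95.0, 80.0, 45.0, 12.0, 5.0])   # example % viability
params, r2 = fit_with_r_squared(conc, response)
usable = r2 >= R2_THRESHOLD
print(params, r2, usable)
```

In the study itself, a bad fit on the single reference line meant the compound could not be assessed at all, while one of the four test lines could fail this check and simply be dropped.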
As I captured before, that can happen for a lot of reasons: sampling error, or the dose range just wasn't right. The delta S, or rather the S value, could also be determined to have a negative efficacy, and there's no way to take the log of a negative value. So those values were discarded as well. And again, this would be the case where the cells grew. We didn't really care in our original study that we were discarding those, because we were just trying to identify the things that are prescribable to kill the tumors. We didn't care about what was causing tumors to grow, but scientifically we would care. We would want to know the mechanisms and pathways involved either way. So that was a limitation of the delta S that we're going to get rid of going forward. And then, of course, the AC50 was not always able to be determined. Maybe the concentration-response curve was flat, so you can imagine the curve was really, really squished and almost looks like a flat line; there's almost no way to find a halfway point or a relevant inflection point. And we did see a lot of flat responses. That's a problem. We didn't have a good way to work with data that looked really flat. So again, not super relevant for the drug-sensitive category, but it is relevant for the science and the understanding. So we'll address that going forward as well. So to qualify how all of our filters played out: out of the 1,912 compounds, 600 passed the drug screens. Obviously a narrow subset, but the upside was that we thought the data we were getting was very reliable, because we had these filters for quality. And especially because we were doing these averages, when we made a claim that these things are more drug-sensitive in tumors than non-tumors, we felt very comfortable saying that. But what's been a thorn in our side from a research perspective is, well, that's still a lot of compounds that we're screening out early on and not doing anything with. So that's what's priming us for the future work going into the delta S prime. So just to give you a preview, I think what we'll do, Paul, if you agree, is probably wrap up at this point and then continue tomorrow. Does that work for folks? It's fine with me. Hassan, sorry, would you be comfortable continuing the rest of this tomorrow? Yeah, and also, when will we go through the technical part? After? Yeah, probably. I mean, we're at 30 out of 50 slides. We should be able to finish up almost all of it. Maybe we can make a push next time to finish all of it. I think we can probably finish in one more meeting if we skip a lot of the data slides and primarily focus on the concepts. That will cut down the total amount of time very substantially. Most of these are data-heavy slides, and they may be of more interest to the biologists. Yeah. Yeah, I think we can hit the last of this in one more meeting. So before we do that, let me just give you a sneak preview of where we're going with delta S prime. So delta S prime has a format similar to S. This is the S prime. And the S prime is also a log, but it's a natural log. And it uses the same input variables, A, D, and C, but it does it in a slightly different way. Now, to give you a sneak preview, this is going to allow us to look at curves both where A and D go down and also where A and D go up.
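Before the S prime preview continues below, here is a hedged sketch of the quality-control decision logic just described, i.e. the reasons a compound/cell-line curve was set aside under the delta S approach. Only the 0.8 R-squared cutoff comes from the talk; the function name, the flatness heuristic, and the other thresholds are illustrative assumptions, not the group's actual code.

```python
# Hedged sketch of the quality-control reasons for "misses" described above.
# The 0.8 R-squared cutoff comes from the talk; everything else here
# (names, flatness heuristic, thresholds) is an illustrative assumption.
from typing import Optional

R2_THRESHOLD = 0.8

def reject_reason(r_squared: float,
                  efficacy: float,
                  response_range: float,
                  min_range: float = 10.0) -> Optional[str]:
    """Return why a fitted dose-response curve cannot be scored, or None if usable."""
    if r_squared < R2_THRESHOLD:
        return "low R-squared: noisy, incomplete, or non-sigmoidal data"
    if efficacy <= 0:
        # The S score involves a log, so a negative efficacy (the cells grew)
        # cannot be scored under the delta S approach.
        return "negative efficacy: log of a negative value is undefined"
    if response_range < min_range:
        # Essentially flat curve: no usable inflection point, so no AC50.
        return "flat response: AC50 cannot be determined"
    return None

# Example: a well-fitting curve where the cells nonetheless grew.
print(reject_reason(r_squared=0.92, efficacy=-5.0, response_range=40.0))
```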
It's also going to allow us to handle cases where A and D are relatively flat and it's hard to calculate C. So it's going to give us a lot more ability to calculate all of those values we were just discarding earlier on; now we're going to have actual values for them. We'll go into it a little more coming up, but a lot of this is very one-to-one with our approach for delta S: we have an S and then a delta S, and we have an S prime and then a delta S prime. So we'll get into that more coming up. But this work starting with the delta S prime is all of the new work. This is where you will be coming onto the project. Everything else up to this point is background, but it's background that very directly affects our approach for the S prime. So yeah, why don't we hop on a call tomorrow at 11:30 and we'll keep going with it. All right, sounds great. Thank you for your time. All right, thank you. Thanks, Paul. All right, bye-bye. Bye.
Historical Work on delta S - Computational Biology - Part 2
3221
MoCo Makers Group
20240604
2024-07-29T11:17:50.560639