Scheduled Commit
- train/transcriptions-14b342ee-8b0c-4629-8212-36606fede94b.json +0 -0
- train/transcriptions-3fe271ac-908d-47be-b08b-cd359cb716d6.json +2 -0
- train/transcriptions-5760b78f-110c-4cbe-ba2a-f03efaa19339.json +1 -0
- train/transcriptions-85f73827-bedb-44b6-a98d-c30a960b96c9.json +1 -0
- train/transcriptions-aa17bcdb-6deb-4f6f-8490-a6d37bceb350.json +1 -0
- train/transcriptions-b69a3458-f38f-4a31-84e8-ea62708adce8.json +1 -0
- train/transcriptions-f952d775-1e9c-4676-9254-510305708f0e.json +1 -0
- train/transcriptions-fa8b13f0-6440-4646-8d0c-cd15cf6d3679.json +1 -0
- train/transcriptions.json +0 -0
train/transcriptions-14b342ee-8b0c-4629-8212-36606fede94b.json
ADDED
The diff for this file is too large to render.
train/transcriptions-3fe271ac-908d-47be-b08b-cd359cb716d6.json
ADDED
@@ -0,0 +1,2 @@
{"url": "https://www.youtube.com/watch?v=KuAn6Fy9UX4", "transcription": " Welcome back to session eight. This is on embedding, fine tuning, and we're going to go ahead and see how we can do this in a tool like Lama Index. Now, this is bringing a couple of things together. We want to align ourselves to everything that we're doing here. We're going to do a quick review, review Lama Index, and then we're doing here. We're going to do a quick review, review Llama Index, and then we're going to build a veterinary camelid index. We're going to build a llama index with Llama Index. All right, you guys ready for this? Then we're going to fine tune it because vets say some crazy words. Remember, why rag? Well, because LLMs lie to us, because we need to fact check them, because we need references, and we need to sprinkle some references into our prompts. But also remember that we talked about how important it is when you have very specialized domains with very specialized language that you may want to consider that as soon as you build up a RAG system or in the process of building up your first RAG system, you consider fine-tuning embedding models. That's what we're going to do here today. Because the language is so specialized, it's just not anything anybody would ever say to anyone and you would randomly find on the internet. So let's take a look here. During this RAG system, which of course looks like this, it combines dense vector retrieval and in-context learning. We're going to leverage Lama Index to build this thing. And we're also going to use Lama Index to fine-tune our embedding model. So recall, Lama Index is a data framework. It's all about that data. Lama Index uses nodes as NLP documents and PDF documents, aka documents as source documents. Those first-class citizens are just chunks of source docs. They're called nodes. The parsers allow us to take the docs and then chunk the docs. They're called nodes. The parsers allow us to take the docs and then chunk the docs and create those nodes. Okay. Query engines is what it's all about. This is the big idea with Lama Index. So what is a camelid, you might say? Well, camelids include camels, of course, llamas, alpacas, vicunas, guanacos. Hey, look at that. If you were wondering where any of those names came from, they came from the camelid family. And if you're wondering why we don't have any more camelids, well, there's none left in the picture. So maybe that has something to do with it. We've moved on to winds like Mistral and Zephyr. But for this, we're going to look and dig super deep on camelids. Shout out to Ohio State for having really in-depth vet info in their research library on camelids. Apparently, this is where you'll find the International Camelid Institute. And this kind of place, if you're doing work in a place like this, this is the kind of place where you might consider fine-tuning your LLM embeddings, because that's probably definitely going to help improve some of the retrieval and some of the generation. Because otherwise, if you just don't understand the words, you're just going to have a tough time. So building this camelid index, this llama index with llama index looks similar to other indexes that we've built with llama index. And if you consider the ways that you might go about improving retrieval, Lama Index is constantly building out these capabilities. But they're often talking about a lot of different ways that you might start to do more interesting and complicated things. 
And one of those ways is fine-tuning of embeddings. In this particular case, because we have such specialized language, we're going to fine-tune those embeddings. The way that we fine-tune embeddings is we're going to, and if we're going to have another, like, joker in here, we're going to have to kick them out again. So bring it. Please mute if you are messing around. The ingredients for fine-tuning embeddings are these question-retrieved context pairs, right? So what we're going to do is we're actually going to create these question-retrieved context pairs, and then we're going to take an existing embedding model, and we're going to train it, so to speak, on camelid research paper context. That's it. And we use sort of a very simple approach here using a built-in Hugging Face Sentence Transformers loss function. And it's really not that complicated. What we see when we do this is that our hit rate, our ability to find the appropriate context, given any particular query, actually improves. And so if you have very specialized languages, you might consider fine-tuning embeddings. And the way you do that, we're going to show you right now. Chris, camelid embeddings. Oh yeah, let's get rocking with some camelids. Okay, hopefully you can see my screen. The basic idea here is we are going to fine-tune our embeddings. So why would we want to fine-tune our embeddings? Well, as Greg said, you know, especially in these kind of veterinary papers, there's just so much language that we have, you know, no idea about, or we don't know how it's related to other, you know, tokens in our corpus, or, you know, it might have one meaning in kind of common parlance, but have a totally different one in the case of this specific application. So the thing we need to do is, we need to, I'll link the Colab, sure, one second, sorry, here we go, the thing we need to do is fine-tune those embeddings, right? So first of all, get a bunch of dependencies. Second of all, we're going to grab our OpenAI key. Then we're going to get the camel data from our data repository. We're going to go to high-performance RAG, and we're going to download camel papers test and camel papers train. You can see there's a bunch of crazy papers about camelids, and, you know, that's great. What is the intuition behind the QA retrieved answer pair idea to fine-tune the embeddings? Is the loss a binary type of loss? So the way that they do it, actually, is they make the assumption that every other context in the QA pair data set is a counter-example to the found context, or to, like, the selected context. And then I can't remember the specific loss function, but I'll bring it up and I'll show you guys. Now that we've got just a lot of papers about camels, or camelids to be more precise, we're going to go ahead and load those. We're going to load those using our SimpleDirectoryReader, which reads directories, our SimpleNodeParser, and our metadata mode. Our SimpleNodeParser is going to parse out all of our documents into nodes for us. Yeah, yeah, I'll bring up the loss function for sure. Once we have these two corpuses, we're good to go. Now we're going to generate QA embedding pairs, which we're going to do with everyone's favorite, of course, AI. So we're going to use OpenAI's GPT-3.5 Turbo to generate QA embedding pairs. Then we're going to save those as a data set. And then we're going to do the same thing for our validation training set. So now we have our validation and we have our train. Everything's good so far. 
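A rough sketch of the QA-pair generation step just described, assuming a ~0.9.x llama_index API (the exact keyword for passing the LLM has shifted across versions); `train_nodes` and `val_nodes` are the node lists parsed from the two corpora. The "every other context is a counter-example" behaviour mentioned above matches an in-batch negatives loss (sentence-transformers' MultipleNegativesRankingLoss), which, as far as I can tell, is what the finetune engine uses by default:

```python
# Sketch of generating (question, context) pairs for embedding fine-tuning.
from llama_index.finetuning import generate_qa_embedding_pairs
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

# Each node becomes a "retrieved context"; the LLM writes synthetic questions
# for it, giving us question-retrieved context pairs. (llm kwarg may differ by version.)
train_dataset = generate_qa_embedding_pairs(train_nodes, llm=llm)
val_dataset = generate_qa_embedding_pairs(val_nodes, llm=llm)

train_dataset.save_json("train_dataset.json")
val_dataset.save_json("val_dataset.json")
```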
Next, we're going to use the Sentence Transformers implementation to get BGE small 1.5. It's just a good embeddings model. It's trained on a big corpus, and it performs very well on our task, which is the retrieval task. So that's why we're picking it. The embeddings leaderboards update very frequently, so you can use whichever one you want that performs the best at whatever tasks you need to do. Now we're going to use the Sentence Transformers fine-tune engine from LlamaIndex. Thanks, LlamaIndex. We pass in our training data set, the model we wish to fine-tune, the output path that we wish to have our model be saved in, our validation data set, and then the number of epochs we're going to train for. Of course, we could train for more or less time. It's totally up to you. But the idea here is that we have, you know, the same kind of training process that we would for a normal model, but this is for a Sentence Transformers model. And the idea is kind of to drag, right? We have these embeddings. If we just imagine them in 3D space, right, we know they're kind of in this cloud, and their proximity to each other, or their direction from the origin, you know, is in a particular spot, and we're just kind of dragging them around, or moving them around, re-clustering them in that space, in order to align with our actual corpus of documents better. So that's a way you could visualize this if you were a person who liked to visualize things. Once we do all that preparation, we do everyone's favorite step. We call .finetune, and we see it go. And then we can grab our fine-tuned model out of our fine-tune engine. Now we can set it up as its own embedding model, and what we're going to do now is evaluate that embedding model. So we've created it, and that's good, but we need to evaluate it. We're going to evaluate it with this. So there's a lot of metrics that you're going to get from this evaluation. We're only going to really care about the MAP@K. So this is the mean average precision at K. I believe it's MAP@5 that it reports back to us. The idea here is we just want to make sure we're retrieving the right kinds of documents in the top five documents retrieved, right? So how often are we doing that, you know, is kind of what we're caring about here. So we want to, for the most part, always retrieve the correct document in the top five. Now, obviously, we're not going to get to perfect with two epochs of training on a very strange corpus, but we can see that with this evaluation, which is all done through wonderful abstractions, thanks to Sentence Transformers in this case, not LlamaIndex, we can see that our base unfine-tuned embedding model receives a MAP@5 of 0.76, and our fine-tuned embedding model receives a MAP@5 of 0.79. So we do see that there is a real increase between the two. Again, this is two epochs on a very, very strange data set. Ideally, we train for longer in order to get this result even better. But it just goes to show you that even with the smallest amount of effort, we can improve these systems to be better at the tasks we need them to perform. One thing I do want to point out or mention: when you're looking at your MAP@K scores, it is important that you set your retrieval K to be the same value as you see in the metrics, right? If we have a very high MAP@K, or MAP@5, but we only retrieve three documents, we're potentially shooting ourselves in the foot. 
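A minimal sketch of the fine-tune engine call being described, again assuming a ~0.9.x llama_index API; the model id, output path, and epoch count mirror what the session mentions but are assumptions:

```python
# Sketch of fine-tuning a sentence-transformers embedding model via LlamaIndex.
from llama_index.finetuning import SentenceTransformersFinetuneEngine

finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,                      # QA embedding pairs from the previous step
    model_id="BAAI/bge-small-en-v1.5",  # the BGE small model mentioned above
    model_output_path="camelid_bge_small",
    val_dataset=val_dataset,
    epochs=2,
)

finetune_engine.finetune()                           # everyone's favorite step
embed_model = finetune_engine.get_finetuned_model()  # usable as the index's embedding model
```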
So you want to make sure that you align the desired behavior of your retrieval pipeline with the metric that you're looking at. Just to point out the fact that, you know, you might not see, let's say we did Ragas on this, but we kept it at only three retrieved documents, we might not see an improvement. And that's because we weren't actually looking at the right metric in order to make a decision about which is quote-unquote better at that task. But with that, I will kick it on back to Greg. All right. So we saw those numbers going up. That was pretty cool. We didn't even train that long, but it did help. And that's pretty legit. You know, that's kind of the big idea. There it is. In a nutshell, fine-tuning embeddings. Lots of other things we could do for any given RAG system. All sorts of fun retrieval. All sorts of fun different node parser stuff in LlamaIndex to play with. All sorts of different evaluation things we could potentially do to instrument this thing and measure different numbers, see if they go up too. But that's a wrap for this little sesh. There are many ways to enhance retrieval and thus generation. Fine-tuning is one that you might want to pick up for very specialized domain language. And, you know, an example of that is the vet llama index with LlamaIndex. So as we wrap up session eight, the final session, session nine, that's not directly related to you guys presenting what you've got today, is going to be on deployment of your app. Once you've got all the logic, all the brains, all the everything in the RAG system, it's time to serve that thing up to users. And so we're going to see how to wrap everything in a Chainlit front end, deploy it to Hugging Face, and make sure that you've got that killer demo that you can show live by the end of the day, night, or morning, depending on where you are in the world, as we start to make it into the final hours of the first annual Chativersary Hackathon.", "title": "Session 8: Fine-Tuning Embedding Models for RAG Systems", "duration": 946, "uploader": "AI Makerspace", "upload_date": "20231204", "description": "What you'll learn this session:\n- How to tune open-source embedding models to align with specialized language, like that used for research\n\nSpeakers: \nDr. Greg Loughnane, Founder & CEO AI Makerspace.\nhttps://www.linkedin.com/in/greglough...\n\nChris Alexiuk, CTO AI Makerspace.\nhttps://www.linkedin.com/in/csalexiuk/\n\nApply for one of our AI Engineering Courses today! \nhttps://www.aimakerspace.io/cohorts", "datetime": "2024-06-09T21:10:45.244781"}
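The evaluation cells are not shown in the transcript; as one rough way to compute a MAP-style retrieval score with sentence-transformers and to align retrieval depth with the metric cut-off discussed above (names, paths, and the helper function are assumptions, not the session's exact code):

```python
# Sketch: score a base vs. fine-tuned embedding model on the validation QA pairs,
# then keep retrieval top-k consistent with the metric's k.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

def retrieval_score(dataset, model_id, name):
    # dataset is an EmbeddingQAFinetuneDataset: queries/corpus/relevant_docs dicts
    evaluator = InformationRetrievalEvaluator(
        dataset.queries,        # {query_id: question}
        dataset.corpus,         # {node_id: context text}
        dataset.relevant_docs,  # {query_id: [relevant node_ids]}
        name=name,
    )
    return evaluator(SentenceTransformer(model_id))  # main (MAP-style) score

base_score = retrieval_score(val_dataset, "BAAI/bge-small-en-v1.5", "base")
tuned_score = retrieval_score(val_dataset, "camelid_bge_small", "finetuned")

# If the metric you report is MAP@5, retrieve five documents at query time too.
retriever = index.as_retriever(similarity_top_k=5)
```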
{"url": "https://www.youtube.com/live/Anr1br0lLz8?si=qz792SKvBHbY-n4N", "transcription": " Hey, Wiz, is there a way to know what comes out of any RAG application that we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. Okay. So there's no absolutes. It's absolutely correct. It's absolutely true. That's pretty difficult. Okay. Okay. So there's no absolutes, but is there a way to know that changes that we make to the system to our RAG application makes the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah. I think like assess RAG systems? Yeah. I think like a RAG assessment kind of make. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg and we are here to talk RAG eval today. Hey, I'm Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout out in the chat where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from Langchain, their most recent stable release and most stable version ever, we're also going to outline how you can actually assess your RAG systems using the RAG assessment or RAGIS framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into Langchain and show how we can go about this improvement process. We are very excited to have the Ragas co-founders and maintainers Jitin and Shaul joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about Ragas. We have the creators in the house today. And of course, we'll see Wiz, aka the LLMm wizard and cto at am makerspace back for demos real soon so let's get into it everybody today we're talking rag evaluation this black art that everybody is really really focused on as they start to build prototype and deploy these systems to production in 2024. as we align ourselves to this session we want to get out of this what's up with this langchain v 0.1 that just came out we want to understand how we can build a rag system with the latest syntax and then also evaluate it there's a lot of changes happening on the ragas side just as on the langchain side finally we want to see how we can pick different tools different ways to improve our system our application see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into laying chain, then we'll go into a high level view of RAG and see exactly where the different laying chain components fit in. Finally, we're going to see what you all came here for today, the RAGIS metrics and how to implement the RAGIS framework. So we'll be building, we'll be evaluating, we'll be improving today and the Q&A should be pretty dope. So, Langchain v0.1.0. What's Langchain all about again? Well, it's all about enabling us to build LLM applications that leverage context, our so-called context aware, so we can connect to other sources of data. We can do lots of interesting prompt engineering. We can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff. And look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. 
And we're doing that in the context of V0.1.0. The blog that they put this out with said, the journey of a thousand miles always starts with a single step. And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes they'll be incrementing this and they'll continue to support v0.1 for a time every time this gets incremented of course as bug fixes and new features come out, they're also going to be incrementing now in this third v0.1.x slot. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. There was a lot of great stuff in the v0.1 release. And we're going to primarily focus on retrieval today, and also on this sort of langchain core that leverages L-C-E-L or the langchain expression language. So in terms of retrieval, there's going to be a lot that you can check out and add after today's event that you can then go assess to see if it actually helps your pipelines. So definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to kind of focus on this Langchain core. This is the Langchain expression language, and this is really a very easy kind of elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, into operating in production environments and monitoring and visibility tooling with LangSmith. So really it kind of all starts from here and allows you to really do some industry-leading best practice stuff with these tools. Now today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations from Langchain community. Chains, of course, are the fundamental abstraction in laying chain, and we will use those aspects to build our RAG system today. When we go and we assess, then we're going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, embedding representation, And then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, we're looking for similar things, similar to the vector question we asked. We can find those similar things. 
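As a minimal sketch of the LCEL composition style referenced above ("syntax like this"), assuming the post-v0.1 package layout with langchain-core and langchain-openai installed:

```python
# Minimal LCEL composition sketch: prompt -> model -> output parser.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize what changed in {release}.")
llm = ChatOpenAI(model="gpt-3.5-turbo")

# The pipe operator composes runnables into a chain.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"release": "Langchain v0.1.0"}))
```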
And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query. You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
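A short sketch of the split-package layout Wiz is describing; the specific imports are illustrative, and each comes from a separate pip install (langchain, langchain-core, langchain-community, langchain-openai, langchainhub):

```python
# Sketch: the v0.1 package split lets you pick and choose what you install.
import langchain
print(langchain.__version__)  # the session shows 0.1.5

from langchain_core.prompts import ChatPromptTemplate           # core abstractions
from langchain_community.document_loaders import WebBaseLoader  # community integrations
from langchain_openai import ChatOpenAI, OpenAIEmbeddings       # provider-specific package
from langchain import hub                                       # prompt hub client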
So one of the things that people oftentimes are worried about language there's a ton of extra kind of uh unnecessary things in there well this is you know goes a long way to solving that problem um and it's awesome so let's see first which version we're working with uh so if you're watching this in the future you can be sure so we're on version 0.1.5 so we're already at dot five um line chain you know they're they're hard at work over there uh we're gonna need to add our open AI API key since we are going to be leveraging open AI uh basically this is a uh you know way that we can both use our lm for evaluation but also for generation and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much so the case that, you know, we have the same stuff that we always have. We need to create an index and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started as we always do with creating the index. Now we can and will still use LCEL. LCEL is important. You know, one of the things that we're going to show in this notebook, because you don't have to use LCL, they've implemented some abstractions in order to modify the, you know, the base chains that you're used to importing to LCL format, so you get all the advantages. But we're still going to look at LCL today, because it is an important piece of the line chain puzzle. because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package where we're going to grab our document loader to get our web-based loader. You know, importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it used to, as it always has. You know, our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check out that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating index. We have a document in this case. You know, it's just one document, but we have it and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core. So this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here and then we can split those documents this is less so focused on a specific uh Lang chain rag and more on the evaluation so we're just kind of choosing these values uh you know to to showcase what we're trying to showcase you see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text embedding AIDA 002. We don't need to use this embeddings model. And it looks like very soon we'll be able to use OpenAI's latest model once the tick token library updates there's a PR that's ready just waiting to be merged which is going to let us be able to do that but for now until that change is implemented we're going to stick with text data embedding 002 and this is like the classic embedding model, right? Nothing too fancy. 
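A sketch of the index-prep steps just walked through: load the v0.1.0 blog post, split it, and set up ada-002 embeddings. The URL and chunk settings here are assumptions (the session explicitly calls its chunking choices arbitrary):

```python
# Sketch: load the blog post, split into chunks, and prepare the embedding model.
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

loader = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/")  # assumed URL
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=50)
splits = splitter.split_documents(docs)  # the session ends up with 29 chunks

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
```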
Just what we need. When it comes to our FAISS vector store, what we need is to get that from Langchain community. But otherwise, this is exactly the same as it used to be, right? So there's no difference in the actual implementation of the VectorStore. It's just coming from the community channel. We'll pass in our split documents as well as our embedding model and away we go. Next, we're gonna create a retriever. This is the same as we've always done, dot as retriever on our VectorStore. Now we can interact with it through that retrieval API. We can test it to see it working. Why did they change to version 0.1.0? And we get some relevant documents to that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG, we need to look at creating that AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull it from the hub. So there are lots of different community-created or Langchain-created hubs. The idea is that, you know, you can just pull one that fits your task from the hub, but the one that we're showcasing is maybe not ideal. So we're going to go ahead and create our own. You can still do this process if you want to create your own. You don't have to use a, you know, one from the hub. And so we're just going to create the simple one: answer the question based only on the following context. If you cannot answer the question with the context, please respond with I don't know. That's a classic. We pass in our context, we pass in our question, away we go. And you'll notice that this is exactly the same as it used to be. Let's go, Langchain. Now we'll set up our basic QA chain. I've left a lot of comments here in the implementation of this LCEL chain in order to hopefully clarify exactly what's going on. But for now, we'll just leave it at we can create this chain using LCEL. And we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with Ragas. So we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which is going to showcase a little bit more exactly what we can do to do this a little bit easier while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. And there you go. Now we can test it out and we can see, you know, what are the major changes in v0.1.0? The major changes are, and it goes on, it gives a correct answer. That's great. And we have, what is LangGraph? And basically, the response from the LLM is I don't know, which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step now that we have this base chain would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was Langchain v0.1.0 RAG. Now let's talk RAG assessment. The Ragas framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that, we can assess different pieces that helped generate that answer within the RAG system. 
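Before moving on to assessment, here is a sketch of the kind of chain Wiz just described: a FAISS index, a retriever, a simple prompt, and an LCEL chain that returns the retrieved context alongside the response (needed later for Ragas). The exact wiring in the session may differ; this reuses the hypothetical `splits` and `embeddings` from the earlier sketch:

```python
# Sketch: FAISS index + retriever + LCEL RAG chain that passes the context through.
from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

vectorstore = FAISS.from_documents(splits, embeddings)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer the question with the context, respond with 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{question}"
)
llm = ChatOpenAI(model="gpt-3.5-turbo")

rag_chain = (
    # fan out: retrieve context for the question, keep the question itself
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    # add the generated response while passing the retrieved context through unchanged
    | RunnablePassthrough.assign(response=prompt | llm | StrOutputParser())
)

out = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(out["response"])      # answer text
print(len(out["context"]))  # retrieved Documents, kept for evaluation
```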
And we can use that information to then decide on updates, on different things that we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to sort of do this in an automated way, we need four primary pieces of information. You're probably familiar with question, answer, input, output, and you may even be familiar with question, answer, context triples. What we need for eval is we need to also add a fourth component, the ground truth, sort of the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth data set. So again, what we can do, since we're not focused on absolutes here, is we can actually create a ground truth data set synthetically. And this is what we'll do today. We'll find the best model that we can, pull GPT-4 off the shelf, and we'll generate this set of information that will allow us to do evaluation. Okay, so we'll see how this works. It's pretty cool. And Ragus has a new library for this. But in terms of actual evaluation, when we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question, how relevant is the context to the question? All right, context recall, on the other hand, asks the question, is the retriever able to retrieve all of the relevant context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks the question, how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks, is the answer fact checkable from the context or is this a hallucination? So the four primary metrics in the RAGUS framework are these four, two for retrieval, two for generation. Let's dig in a little bit deeper to each one so that we really try to start grokking each metric individually because they're slightly different but nuanced. Faithfulness is trying to measure this factual consistency. Let's look at an example. The question, where and when was Einstein born? Context. If this is the context, Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says, well, he was born in Germany and he was born on 14 March 1879. Where a low faithfulness answer might get part of it right, but might hallucinate, right? We want to avoid these hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want this to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, how factual it is, what we're doing here is we're penalizing when the answer lacks completeness or on the other side, when it contains redundant details. So, for instance, where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. Oh, France is in Western Europe. 
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Very relevant, right? Okay, so context precision, as we get into the retrieval metrics, we're thinking about, in this case, a way that we can evaluate whether all of the ground truth relevant items are present in the context and how well ranked they are in order. So what we're looking for is we want all the most relevant chunks that we return from our vector database to appear in the top reference ranks. Okay. We want lots of good stuff ranked at the top. That's what we want. And so we're really looking for everything that's relevant to the question to then be returned in our context and to be order ranked by relevancy. Makes sense, you know, just the way we would want to do it if we were writing a book report or something. Finally, context recall is again kind of doing this same thing that we talked about before. We want to make sure we're paying attention to everything that's relevant. We want to make sure that we're addressing everything that's asked. So if the question here, where is France and what is its capital? Once again, if we have a ground truth answer already, the key here is we're actually leveraging ground truth as part of calculating this metric. France is in Western Europe and its capital is in Paris. A high context recall is addressing both of these. And within each sentence of the output addressing both of these. You can look sort of ground truth sentences that can be attributed to context over number of sentences in ground truth. And a low context recall is going to kind of be doing the same thing that we saw earlier. Well, France is in Western Europe, simple villages, Mediterranean beaches, country is renowned, sophisticated cuisine, on and on and on, but it doesn't address anything about Paris, which of course the ground truth does. And we can start to get a picture of, if we look at each of these metrics, we get some idea of how our system is performing overall. But that's generally kind of difficult to get a perfect picture of that. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision is sort of conveying this sort of high-level quality idea, right? Not too much redundant info, but not too much left out. Context recall is measuring our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy is sort of, am I to the point here? Am I very, very relevant to the question that was asked? Or am I kind of going off on a tangent here? And finally, RAGUS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea. And that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to use for others, but beware, you know what's really going on and directional improvements is really what we want to be focusing on. But we want to basically look at how the answer is related to the ground truth. Of course, if we have like a true ground truth data set, this is probably a very, very useful metric. If we have one that's generated by AI, we might want to be a little bit particular, a little bit more careful in looking at this metric and relying on it too much. 
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're kind of looking at two different things. We're looking at that factual similarity, but we're also looking at semantic similarity. So, you know, again, you can use this Einstein example. If the ground truth was Einstein was born in 1879 in Germany, the high answer correctness answer is exactly that. And then of course, low answer correctness is you're getting something literally wrong. So there is overlap between all of these things and it's important to sort of track that. But overall, the steps for doing RAGIS are to generate the question answer context ground truth data. And there's a awesome new way to do this called synthetic test data generation that has recently been released by RAGUS. We'll show you how to get it done today. Run that eval and then go ahead and try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from Langchain today. It's called the multi-query retriever. This is going to sort of generate many queries from our single query and then answer all of those and then return the relevant context from each of those questions into the prompt. So we're actually getting more information. But you can pick any retrievers off the shelf and you can then go back, you can look, did my metrics go up? Did they go down? What's happening as I add more data or more different retrieval advanced methods to my system? And in this way, we can see how we can combine RAGIS with RAG improvement as Wiz will go ahead and show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGIS, this is the thing we're here to talk about, right? It's a amazing library that does a lot of cool, powerful things. But the thing that is, you know, most important is that it allows us to have some insight into changes we make in terms of the directional impact they have, right? So while we might not be able to say, you know, these answers are definitely true, as Greg was expressing, we can say, it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do, you know, a evaluation on all of the metrics, we'd have two important things. One, we need to have questions. So these are questions that are potentially relevant to our data. In fact, they should be relevant to our data if we're trying to assess our retrieval pipeline, as well as our generations. And also some ground truths, right? As Greg was mentioning, you know, we are going to use synthetically created ground truths. So it might be more performant to use, let's say, you know, human labeled ground truths. But for now, we can let the LLM handle this. I'll just zoom in just a little bit here. And the idea is that we're going to leverage Ragus's new synthetic test data generation, which is very easy to use, much better than what the process we had to do before, which is kind of do this process manually. We're going to go ahead and use this to create our test data set. Now, it's important to keep in mind that this does use GPT-3, 5 Turbo 16 K as the base model, and it also includes GPT-4 as the critic. So we want to make sure we're not evaluating or creating too much data, or if we are, that we're staying very cognizant of the costs. 
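A sketch of Ragas' synthetic test set generation as described here and in the next part, assuming a ragas ~0.1 API from around the time of the session; `eval_documents` (the separately chunked documents) and the test size are placeholders:

```python
# Sketch: generate a synthetic test set (question, contexts, ground_truth)
# with Ragas; with_openai() used gpt-3.5-turbo-16k as generator and gpt-4 as
# critic by default at the time -- watch the cost.
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

generator = TestsetGenerator.with_openai()

testset = generator.generate_with_langchain_docs(
    eval_documents,  # hypothetical: docs re-chunked at 1000/200, per the next section
    test_size=20,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
test_df = testset.to_pandas()  # question, contexts, ground_truth, evolution_type
```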
So the first thing we're going to do is just create a separate data set or separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're just asking the same LLM, the same questions with the same context, which might, you know, unfairly benefit the more simple method. So we're just going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same, 29, 24. And then we're going to use the test set generator. It really is as easy as test set generator with open AI. That's what we're using for our LLM. And then we're going to generate with langchain docs. You'll notice this is specifically integrated with langchain. There's also a version for Lama index. And all we need to do is pass in our documents, the size that we like of our test set, and then the distributions. Now this distributions is quite interesting. Basically, this is going to create us questions at these ratios from these subcategories. So the idea is that this is going to be able to test our system on a variety of potentially different, you know, tests, right? So we have simple, which is, you know, as you might think, very simple. And we have, you know, this reasoning, which is going to require some more complex reasoning that might, you know, tax our LLM a little harder. And then we have this multi-context, which is going to require multiple contexts. So our LLM is going to have to pick up a bunch of them in order to be very good at this particular kind of task. And the reason this is important is that not only do we get kind of an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGUS team for putting this in. You know, we love this and it makes the job very much a lot easier. So that's great. We look at an example of the test data. We have our question, we have some contexts, and then we have our ground truth response, as well as our evaluation type, which is in this case, simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between Langchain and RAGIS. It's currently being worked on to be brought up to speed. But for now, we're just going to kind of do this manually. So what we're going to do is we're going to take our test set. We're going to look and see. We've got our questions, context, ground truths, as well as our evolution type. This is our distribution that we talked about earlier. And then we're going to grab a list of questions and ground truths. We're going to ask those questions to our RAG pipeline. And we're going to collect the answers and we're going to collect the contexts. And then we're going to create a Hugging Face data set from those collected responses along with those test questions and our test ground truths. We can see that each of the rows in our data set has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this data set, we're good to go and we can go ahead and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from Ragas, which is very good. These are the ones we're caring about today. Faithfulness, answer relevancy, context precision, context recall, and answer correctness. 
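A sketch of collecting the RAG pipeline's responses into a dataset Ragas can evaluate, and then running the evaluation over the five metrics just listed. This reuses the hypothetical `rag_chain` and `test_df` from the earlier sketches; column names follow ragas ~0.1 conventions and may differ by version:

```python
# Sketch: build the evaluation dataset, then run the Ragas metrics over it.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
    answer_correctness,
)

test_questions = test_df["question"].tolist()
test_groundtruths = test_df["ground_truth"].tolist()

answers, contexts = [], []
for question in test_questions:
    out = rag_chain.invoke({"question": question})
    answers.append(out["response"])
    contexts.append([doc.page_content for doc in out["context"]])

response_dataset = Dataset.from_dict({
    "question": test_questions,
    "answer": answers,
    "contexts": contexts,
    "ground_truth": test_groundtruths,
})

results = evaluate(
    response_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall, answer_correctness],
)
print(results)                     # aggregate scores
results_df = results.to_pandas()   # per-question scores, mappable to evolution_type
```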
And you can see it's as simple as loading, importing them, and then putting them into a list so that when we call the evaluate, you know, we're going to pass in our response data set, which is this data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that Ragas made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times that we used to see. Thanks, Ragas team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. It's like we want these numbers to be high, but we're more interested in seeing if changes we make to our system make those numbers higher. So let's look at another awesome part of RAGUS before we move on to making a change and seeing how it goes, which is we have the ability to look at these scores at a per-question level in the Pandas data frame. So you can see that we have all of our scores and they're given to us in this data frame this is huge especially because we can map these questions back to those evolution types and we can see how our model performs on different subsets of those uh those distribute the elements of that distribution so now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the Langchain documentation. We're going to use this as an advanced retriever. So this should retrieve more relevant context for us. That's the hope anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before. We're just wrapping it in this multi-query retriever. Now, before we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which is going to be our prompt. We're using the same prompt that we used before. So we're not changing the prompt at all. And then we're going to create retrieval chain, which is going to do exactly what we did before in LCL, but it's, you know, we don't have to write all that LCL. So if you're looking for an easier abstracted method, here you go uh you'll notice we call it in basically the same way and then we are also looking at uh this answer the answer is basically uh you know the response.content from before and then uh you know we can see this is a good answer makes sense to me uh but we also have a better answer for this what is Landgraf question. So this heartens me, right? I'm feeling better. Like maybe this will be a better system. And before you might have to just look at it and be like, yeah, it feels better. But now with RAGUS, we can go ahead and just evaluate. 
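A sketch of the multi-query upgrade plus the abstracted chain constructors described above (create_stuff_documents_chain / create_retrieval_chain); note the prompt is re-written around the `{input}` variable those helpers expect, so this is illustrative rather than the session's exact cell:

```python
# Sketch: wrap the base retriever in a MultiQueryRetriever and build the
# abstracted retrieval chain instead of hand-written LCEL.
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate

advanced_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

retrieval_prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context. "
    "If you cannot answer the question with the context, respond with 'I don't know'.\n\n"
    "Context:\n{context}\n\nQuestion:\n{input}"
)

combine_docs_chain = create_stuff_documents_chain(llm, retrieval_prompt)
retrieval_chain = create_retrieval_chain(advanced_retriever, combine_docs_chain)

result = retrieval_chain.invoke({"input": "What is LangGraph?"})
print(result["answer"])        # generated answer
print(len(result["context"]))  # retrieved Documents, again kept for evaluation
```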
We're going to do the same process we did before by cycling through each of the questions in our test set and then getting responses and context for them and then we're going to evaluate across the same metrics you'll notice that our metrics uh have definitely changed so let's look at a little bit more closely how they've changed so it looks like we've gotten better at our faithfulness metric we've gotten significantly better at answer relevancy which is nice we've gotten a little bit better at context recall. We've taken some small hits, a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve kind of what we hoped it would improve. And now we are left to tinker to figure out how would we improve this or answer correctness doesn't get impacted by this change, but at least we know in what ways, how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGIS. And each of these metrics correspond to specific parts of our RAGIS application. And so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our rag systems not just sort of by thinking about i think that's better uh land graph question got answered better but actually we can go and we can show our bosses our investors anybody that might be out there listening hey look we have a more faithful system check it out went from base model to multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can go and take just things off the shelf that we're trying to qualitatively analyze before and directionally improve our systems by instrumenting them with RAGIS and measuring before and after small iterations to our application. So today we saw Langchain v0.1.0 to build RAG, and then we actually did RAG on the Langchain v0.1.0 blog. Expect stable releases from here. It's more production ready than ever. And you can not just measure faithfulness, you can measure different generation metrics, different retrieval metrics even different end-to-end metrics and big shout out to everybody today that supported our event shout out to langchain shout out to ragas and shout out to everybody joining us live on youtube with that it's time for q a and i'd like to welcome Wiz back to the stage as well as Jithin and Shaul from Vragus, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jitin and Shaul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich. Yeah. You want to go for it? I know I didn't actually understand what you actually mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shaul? Yeah, I mean, what is the difference between memorization and hallucination rack systems? That's it. 
The line between memorization and hallucination, I don't know where to draw that particular line. It's something seems like, seems like what it meant is the usage of internal knowledge versus you know there are situations in drag when knowledge is a continually evolving thing right so maybe the llm thing that a person is you know is still alive but the person died yesterday or something now the now if if that particular thing is uh is read using wikipedia or something there will be a contrasting knowledge between the LLM and what the ground truth Wikipedia sees. Now, that can be hard to overcome because the LLM still believes something else. So it's a hard to crack problem and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, what we require is when using LLMs to build RAC, we can align LLMs so that LLMs answer only from the given grounded text data and not from the internal knowledge. So, or there must be high preference to the grounded text data compared to what is there in the LLMs internal knowledge. So that can be one of the situations. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided uh basically really i mean yeah yeah we when it comes to the memorization versus hallucination i think the the most important thing is uh you know memorization is that you could maybe frame it as a slightly less negative form of hallucination because it's likely to be closer to whatever the training data was. But in terms of RAG application, both bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context aware chunking. Can we expect significant improvements of rag results using smart chunking? What do you think, Jitin? Is this something that we can expect improvements in? Yeah, so how you, so one thing that we see when we're building rag systems is that how you're formatting the data is where most of the problems are. Like if you take some time to clean up the data and to format the data is like where most of the problems are like if you if you take some time to clean up the data and like to format data that actually makes it easier for your act the performance difference like like really great because like models right now if you're using a very stable model if you provide with the correct context the model will be able to use the information in the context to get it so all these tips and tricks to optimize about even like um chris was using the multi uh context method right it's also another trick to get make sure that you get different context from different perspectives into the final answer so all these different types of tricks can be used and this is actually why we started this also we wanted to like evaluate all the different different tricks that are there out there and try to see which works best because it can be different on your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot more because what goes in comes out. So that's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shaul. 
Can you compare TrueLens and RAGAS? This is the first I've heard of TrueLens. Maybe if other people have, and maybe you can tell us a little bit about what they're doing and what you're doing and the overlap you see. Sure. Yeah, TrueLens has been around for a while for evaluating ML applications, and they are also doing a lot of applications. So RAGAS currently is mostly focused on racks as in we wanted to crack the application that most people care about that is racks. And so we are mostly, you know, doing things that can help people to evaluate and improve their racks. We are not building any UI. We are largely providing for the integrations part. We are largely interested in providing integrations to players like Langsmith so that people can trace and see their UI rather than building a UI on top of Raga. So Raga mainly offers metrics and features like as you have seen, synthetic test data generation to help you evaluate your racks. I don't think TrueLens has a synthetic data generation feature, which is something that most of our developers really liked because it has saved a ton of their time because nobody really wants to go and label hundreds of documents of documents it's a boring job right so we are trying to double down on these points that we have seen that developers really like and we are trying to stay true to the open source community as well nice okay very cool very cool rad asks I'll send this one over to you, Wiz. Can you combine multiple query retriever with conversational retrieval chain? Sure. Yeah. Basically, Langchain works in a way where you can combine any retriever inside of any chain, right? So a retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process or combine many different retrievers in an ensemble, you can do that with basically any chain. Basically, that conversational retrieval chain is looking for a retriever. And so as long as it can be accessed through the retrieval API, it's going to work fine. retriever. And so as long as it can be accessed through the retrieval API, it's gonna work fine. I would I would add though, conversational retrieval chain, you'll want to use the 0.1.0 version, which is, you know, been implemented with LCL. But other than that, you're good to go. Okay, okay. And sort of back to this idea of sort of smart, chunking, smart hierarchy of data. Is there sort of like, we often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? What's the chunk size? So Sujit asks, and maybe I'll send this one over to you, Jithin, I know the chunk size matters. Are there like guidelines for chunking that you guys are aware of or that you recommend when people are building rag systems? Yeah, so I don't have like a very good guideline. Maybe Shahul can take back it up. But one thing that I've like seen like personally from experience is like, so A, do the evaluations, but then B, like also making sure that you get, you combine like multiple, like, so you basically, you create a hierarchy system where you have like different chunks. Then you summarize the different like concepts, like define the, uh, summarize the different channels so that, uh, even like all the beer, like core ideas are there in the hierarchy that actually has been like very like helpful. So, yeah. 
So exactly on chunking size, I haven't seen it in the metrics as such, but all the recursive summarization has helped, and I think LlamaIndex has a few retrievers right there. Shahul, what do you think? Yeah, just adding some more points onto it. I think there is no one chunk size that fits all types of documents and all types of text data; it's a relative thing. So there are two ways to handle this problem. The general rule of thumb is to ensure that there is enough context, that the chunk makes sense even on its own: as an individual chunk, it should make some sense if a person reads it. So how do you achieve this? You can achieve it either by writing a set of heuristics, say, okay, determine the document type or something and change the chunking based on that. And moving on from heuristics to where we are going, I think we might even see smaller models, very small models, that are capable of determining the chunk boundaries smartly, so that you don't really have to rely on the heuristics; it's a more generalizable way of doing it. So I think that's where we are going in the future of chunking, and hopefully the problem gets solved like that. Yeah, I really like this idea of making sure each individual chunk makes sense before moving up a level and thinking about, okay, what's the exact hierarchical, parent-document, multi-query, whatever it is that you're doing: each chunk should make sense. And that's going to be dependent on data. I really liked that. Okay, so related to that, I want to go to this embedding model question in the Slido from Ron. It's similar in relation to this chunking idea. I mean, people always want the answer, you know? So, what chunk size? Here, Ron asks, which embedding models should I be using when I develop a system? Any emergent models or techniques that I can see significant improvements with? Maybe Shaul, if you want to continue here. Sure. Again, there is no one-size-fits-all answer here. The thing is that it depends on a lot of factors. The first question will be open source or closed source. You have a lot of open source players even rivaling OpenAI with their open source models; I think recently the Alibaba group released their M3 embedding, which is awesome, the most powerful open source embedding we have ever seen, even rivaling OpenAI's Ada embeddings. So it's a set of questions that you have to answer. If you want an easy way of building a baseline RAG, of course OpenAI embeddings are a good place to start; you don't have to worry about anything else. Then you can iteratively improve it, and that's where RAGAS also comes in. Let's say you now have an abundance of embeddings to choose from and you want a way to compare them: you can use RAGAS to compare all these different embeddings, choose the one that fits you, and you're done. There it is. There it is. Just closing up this topic on chunks and embedding models.
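To make the chunk-size discussion concrete, here is a minimal sketch that splits the same text at two granularities so you can check whether each chunk still reads sensibly on its own. It assumes LangChain's RecursiveCharacterTextSplitter and a local text file; the file name, sizes, and import path are illustrative and may vary by LangChain version.

```python
# Small sketch of the "no one-size-fits-all chunk size" point: split the same document at
# two granularities and eyeball whether each chunk still makes sense in isolation.
from langchain_text_splitters import RecursiveCharacterTextSplitter

document = open("my_corpus.txt").read()  # assumption: any long text file you have locally

for chunk_size in (256, 1024):
    splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=50)
    chunks = splitter.split_text(document)
    print(f"chunk_size={chunk_size}: {len(chunks)} chunks")
    print("first chunk:\n", chunks[0][:300], "\n---")
```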
Wiz, I wonder, why did you choose Ada? Why did you choose, what is it, 750 overlap? Any particular reason? Zero thought put into those decisions. We used Ada because it's the best OpenAI model that's currently implemented, and we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or a more mindful approach, and so to do that, we just kind of selected them. I think the thing I really want to echo that we've heard so far is that when we're thinking about our index, or we're thinking about our vector store, we really want to be able to represent individual quanta of information. The closer we can get to that, the better it will be, and then we can add that hierarchy on top. And I think what was said about using models to determine that at some point is definitely a future we can imagine we'll be living in soon. Yeah, and I think again we go back to this data-centric idea. It's easy to get the RAG system set up and get it instrumented with RAGAS, but you're going to get the improvements, you're going to get the thing really doing what you need it to do for your users, by doing the hard, kind of boring data work, the data engineering and data science on the front end, that you just can't outsource to AI and have to kind of deal with yourself. Okay, one more sort of "what's the answer" question. I want to maybe send this one to Jithin. If somebody is picking up RAGAS and they build a RAG system and they're like, okay, well, which RAGAS metric should I use? Which one should I look at? What would you say? Is there a starting point? Is there a sequence that you'd look at? Or is the jury still out on this? So, first of all, just try it out with all of the metrics. Basically, figuring out which components work, and what the state of all these components is, gives you an idea of, okay, where can I make an improvement as fast as possible? If your generator is bad, maybe try out a few other LLMs. Or if your retriever is bad, then figure out, okay, in the retriever part, what is actually happening: is it context relevancy, is it the recall that's bad? And that is the way. So starting off, try out all the metrics that you have, and then focus on the ones that are the worst.
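A minimal sketch of the "run every metric, then chase the worst one" advice, assuming RAGAS v0.1-era imports and column names; newer releases may rename these, and the sample row is obviously a toy.

```python
# Sketch of "run all the metrics, then fix the weakest component", using RAGAS ~0.1-style APIs.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision, context_recall

eval_dataset = Dataset.from_dict({
    "question": ["What is RAGAS?"],
    "answer": ["RAGAS is a framework for evaluating RAG pipelines."],
    "contexts": [["RAGAS provides metrics for retrieval and generation quality."]],
    "ground_truth": ["RAGAS is an open-source RAG evaluation framework."],
})

results = evaluate(
    eval_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
# Retrieval-side scores (context_precision / context_recall) point at the retriever;
# generation-side scores (faithfulness / answer_relevancy) point at the LLM.
print(results)
```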
And like after you understand like what the metrics are, you will get an idea of how you could like what other stuff you can actually try out to improve it and if it's like try out the easiest part like cross out the low-hanging fruits first and that is how you would like over time like progressively like uh improve it like but like i said it's not the absolute values that matter it's like the trends that matter right so you guys did a good job in explaining that so make sure like you go for the easiest things that you can patch up fast and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's like, yeah, it's so simple when we can break it down like this. And you know, just a shout out to everybody out in Manny, just shouting out to Manny. That was kind of an attempt to answer one of your many questions today. We'll see if we can get some more on LinkedIn, but I think this idea of like getting your system instrumented so you can start to look at and chunk up different pieces of it and try to improve them. There's a lot of content that needs to be made on this. These guys are open source first, open source forward. We'd love to see some folks in the community start to put some guides together for how to actually break down and use RAGUS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGUS in 2024? Maybe if either of you wanna go ahead and take take this go ahead and take it let us know what to expect from you guys heading forward this year yeah shall we we want to take this yeah yeah that's a tricky question so you want to go where the community takes us so yeah doubling down on um things like synthetic data generation there are there are a lot of interests there there are a lot of interest in expanding ragas to other llm tasks as well so yeah there are all these interesting directions to take hopefully uh you know we'll get more signals from the community on which path so to take i mean we do have a lot of directions a lot of feature requests coming in so we have to just you know take that decision and move on but uh but yeah as of now um the the synthetic test generation is something that gets a lot of interest we want to you know make it very stable very useful make sure that that we push the limits of you know uh the the closed source models and plus frameworks analogy uh to build a great uh you know test data point that's that's very easy and uh easy to use yeah yeah anything to add yet then yeah like honestly like so right now we have a good base right now we're like very curious what like what we can do like evaluation driven development what are the extremes of that so like curious to see like what like uh what the community comes up with what like like you guys can like we come up with so yeah excited really excited for that yeah yeah let's see what everybody builds ships and shares out there and uh and contributes well thanks so much jiten thanks shaul thanks Wiz. We'll go ahead and close it out for today. And thanks everybody for joining us. Next week, you can continue learning with us. We're talking alignment with reinforcement learning with AI feedback. If you haven't yet, please like and subscribe on YouTube. 
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI engineering bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well. We will get back to your question and we will share all the questions today with the RAGUS guys to see if we can get follow-ups for everybody that joined us and asked great questions today. So until next time and as always keep building, shipping and sharing and we and the RAGUS guys will definitely keep doing the same. Thanks everybody. See you next time.", "title": "RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1", "duration": 3842, "uploader": "AI Makerspace", "upload_date": "20240207", "description": "GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively!\n\nSpecial thanks to LangChain and RAGAS for partnering with us on this event!\n\nEvent page: https://lu.ma/theartofrag\n\nHave a question for a speaker? Drop them here: \nhttps://app.sli.do/event/2rLa8RML994YsMQt1KLrJi\n\nSpeakers: \nDr. Greg, Co-Founder & CEO\nhttps://www.linkedin.com/in/greglough...\n\nThe Wiz, Co-Founder & CTO\nhttps://www.linkedin.com/in/csalexiuk/\n\nJoin our community to start building, shipping, and sharing with us today!\n https://discord.gg/RzhvYvAwzA\n\nApply for our new AI Engineering Bootcamp on Maven today! \n https://bit.ly/aie1\n\nHow'd we do? Share your feedback and suggestions for future events.\nhttps://forms.gle/ryzhbvxZtbvQ4BCv5", "datetime": "2024-06-09T21:25:23.164053"}
train/transcriptions-5760b78f-110c-4cbe-ba2a-f03efaa19339.json
ADDED
@@ -0,0 +1 @@
{"url": "https://www.youtube.com/watch?v=EeZIKQmWSXg", "transcription": " Hey, Wiz. So if I'm a super beginner trying to get into fine tuning, should I use Hugging Face and Peth Library or should I maybe pick up Mistral Fine Tune instead? Hugging Face is probably great. Yeah. So is it like a fundamentally different method that is being used for fine tuning between like a PEFT, LoRa, and the approach we'll see today in Mistral Fine Tune? No, no. It's the same thing under the hood. Yeah. Same, same. Okay. Okay. So is it a, quote, lightweight code base that enables, quote, memory efficient and performant fine tuning on mistral models at least yes absolutely it's that yes is hugging face also a lightweight code base that enables memory efficient and performant fine tuning on mr the light the lightweight we can quibble about for sure okay but the But the rest of it, absolutely yes. Okay, okay, okay. But it does the thing. It did the fine tuning, right? It did, yes. Okay, okay. So we're going to sort of try to assess today if this thing provided a, quote, simple guided entry point to fine tune Mistral models. And, of course, we can quibble about simple and guided, but it did the thing today, right? It did the thing. So, you know, it does the thing that it says on the tin. And here we are, folks, another day, another tool. Welcome to the open source LLM Edge, everybody. We're going to dive in and get to the bottom of the concepts and code behind Mistral FineTune. I'm Dr. Greg, that's the whiz, and we are co-founders of AI Makerspace. We're excited to dive into this new tool, and by the end of today, you'll sort of recall what powers and underlies fine-tuning throughout the industry, not just open source tools, but even a lot of the closed source tools that you might have your hands on today. Of course, if you have questions along the way, please use the Slido. We will get to questions probably throughout this event. This is going to be kind of a discussion-heavy one, so keep the questions coming in the Slido. And also, if you've got questions that are super relevant to the discussion we're having at the moment, YouTube live. All right, everybody, let's get into it today. We're going to go ahead and kick off fine tuning Mistral 7B with Mistral Fine Tune. And aligning ourselves to today, we want to really make sure that we understand the legend, Laura, that's at the core of all of the fine-tuning that we see. We want to understand how to use Mistral FineTune. We're going to show you how to do that. We're going to do some instruct tuning with it. And we want to compare and contrast what we saw with this new library to what we're comfortable with, what we're used to with Hugging Face's parameter efficient fine tuning library and methods like LoRa and QLoRa. So we'll start with a review and then we'll dive into what we're seeing from Mistral Fine Tune, talk about Hugging Face versus Mistral Fine Tune. Do some fine tuning and we'll again discuss throughout. So everybody, Laura, let's review here. First off, fine tuning. What are we talking about? Well, we're talking about modifying modifying the behavior of an LLM by updating the weights of the neural network, the weights of the transformer. And full fine-tuning, it means updating all of the weights. But full fine-tuning, because these things are so large, is often quite infeasible for the average Joe, for the GPU poor out there, like we are, and like we know many of you are. And so we need a better way. 
And the better way that the industry has really adopted is low-rank adaptation. And this is now not full fine-tuning, but rather fine-tuning only part of the neural network, part of the transformer, and using a factorized matrix approach to do so. Let's recall back to the OG paper here. October 2021, light years ago, quote from the abstract, as we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Absolutely classic. Hence we propose LoRa, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the transformer architecture, meaning each attention layer within each decoder block, thus greatly reducing the number of trainable parameters for downstream tasks. Okay, hold on. Say what? Freezes the pre-trained model weights and injects trainable rank decomposition matrices. Hold that thought. We're going to do some stacking and then we'll discuss. Mistral FineTune, just released, says Mistral FineTune is a lightweight code base, memory efficient, performant fine tuning. It is based on LoRa, a training paradigm where, quote, most weights are frozen and only 1% to 2% additional weights in the form of low-rank matrix perturbations are trained. Low-rank matrix perturbations. Okay, so we've got training paradigm, 1% to 2% additional weights in the form of low rank matrix perturbations. That's how Mistral is talking about it today in May, 2024. And the guys from Microsoft that wrote the Laura paper talking about it in 2021 said freezes the pre-trained model weights and injects trainable rank decomposition matrices. Okay, so let's sort of cover a little bit of terminology before our discussion here. One of the things that really inspired the authors of the LoRa paper was a paper written in December of 2020 called Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. And so here we can see a quote from this paper. Common pre-trained models have a low intrinsic dimension. So there exists here a re-parameterization that is effective for full fine-tuning and can be as effective as the full parameter space. So this whole idea of this re-parameterization, that is where LoRa comes in. And of course, we're using a low rank adaptation approach. So it's important to understand the idea of matrix rank. The easy way to sort of understand this is to think about a very simple problem where we have a bunch of columns in our data set, and we're thinking about having a number of linearly independent columns. This idea is very common for anybody that studied matrix algebra. And so we can kind of think of how many features, how many columns are actually giving new information in sort of a broad way. We can sort of contextualize this idea of rank. How much of the information is actually important to let's say, pay attention. And when we think about another classic matrix algebra principle technique that's used here, it's just decomposition. So we're decomposing a problem into its constituent parts, thereby breaking a difficult computation into simpler tasks. So all of this taken together from the idea of an intrinsic dimension, to the idea of low rank, to the idea of matrix decomposition, to the idea of trainable injected decomposition matrices, to low rank matrix perturbations. We're going to sort of wrap all this together in a discussion now. I'd like to invite the Wiz back up to talk through this. Whiz, I think you did a pretty cool video on this idea of Laura quite some time ago. 
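The "1% to 2% additional weights" claim is easy to sanity-check with a back-of-the-envelope calculation; the hidden size and rank below are just typical illustrative values.

```python
# Compare a full d x d weight update against its rank-r LoRA factors (d x r and r x d).
# 4096 is a typical hidden size for a 7B model and r=16 is a common LoRA rank.
d, r = 4096, 16

full_update_params = d * d        # training the whole delta-W matrix
lora_params = d * r + r * d       # training A (r x d) and B (d x r) instead

print(f"full delta-W parameters : {full_update_params:,}")
print(f"LoRA A+B parameters     : {lora_params:,}")
print(f"fraction of full update : {lora_params / full_update_params:.2%}")  # ~0.78% at r=16
```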
And I wonder if you can kind of just give us your overview, your thoughts, like with the diagram that gets shown in everybody's presentation it's the one on the first page of the paper as you look at this maybe you can walk us through what you see and you know maybe define some of our model, right, we can represent, so think of it this way. Our model has two, two real quote unquote real components, right? One is this idea of a, uh, you know, base weight matrix. And the other is this idea of a, you know, update weight matrix. Now, typically these are not like, we don't need to pull these apart. In fact, we wouldn't because it adds a lot of, you know, overhead where we have to add them back together and everything like this. But the idea is that because we can represent our weight updates as a separate update matrix, and then we can lock in those base pre-trained weights, we can then take that delta matrix and represent it in this low-rank, you know, product matrix form. We have these two matrices that will give us our actual, you know, weight update matrix. So the key insight here is that the base model weights are different than the final fine-tuned weights, and that difference is some delta weight, right? And we can represent that delta weight with this low-rank form. And the idea is we're going to pay computational overhead for this because we have to keep factoring together these matrices and then adding them back. But it's worthwhile to spend that little bit of extra compute in order to save a massive amount of required GPU memory. So while the training is the fine tuning is is is slower, we're adding latency to our training. Right. We massively reduce the amount of actual parameters that we need to hold in memory, actual parameters that we need to hold in memory, which means we can train these models on much smaller than previously used, you know, hardware. And that's the crux of it, right? By training A and B, and then factoring them together and adding them back to that base weight matrix, what we're really doing is we're figuring out what's the best, you know, form for A and B that results in weight updates that make us good at our task. So there you go. Okay. Okay. So it's really important to understand then that this is actually only important during training. Is that right? Where we're sort of actively updating the weights. So that, that, that's a great thing that you've mentioned. So no, well, kind of. We, so the fact that we can represent these matrices, right, as a, as a low rank form means that they are very lightweight, and it means that, you know, well, if we could add these quickly to our base weights, you know, then, well, at inference time, actually, we can just add whatever one we want. So say we had Mistral, for example, and we fine-tuned it to do summarization, right? Well, we'd have like an adapter, right, which is just the LoRa weights that we could apply to that base model to make it good at summarization. But let's say we also fine-tuned it on a math task or a, you know, translation task. Well, at time of inference, we can choose which adapter to use. So it is very important for training, but we can also leverage it in order to make inference more quote unquote powerful. Okay. Okay. Yeah. So we can swap out these low rank adapters at inference. 
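A hedged sketch of the adapter-swapping idea using Hugging Face PEFT, since that is the stack most readers will have on hand; the adapter repository names below are placeholders, not real artifacts.

```python
# One frozen base model, several LoRA adapters swapped in at inference ("drill bits").
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3", torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach a first adapter, then load a second one alongside it (paths are placeholders).
model = PeftModel.from_pretrained(base, "my-org/summarization-lora", adapter_name="summarize")
model.load_adapter("my-org/math-lora", adapter_name="math")

model.set_adapter("summarize")   # base weights stay frozen; only the active adapter changes
# ... generate summaries ...
model.set_adapter("math")
# ... generate math answers ...
```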
But what we're doing during training is we're essentially like like plugging in an empty adapter and sort of uh training it we're calibrating it to the sort of uh thing we want it to be able to do right i mean this is ultimately when we're done training do we have then an adapter or do we have like a model that's fine tuned so because we kept our base model frozen we never actually touched it right we still have the base model it still exists but we also have this artifact that we've created that we commonly refer to as an adapter right which is just the laura weights and now as long as that base model doesn't change, we can carry those adapters around and then use them like a bit in a drill, right? Whenever we have that base model, we can use those adapters. So that is, it's important to understand in that, that the, exactly, that as long as the drill is the same or the base model is the same, we can use that bit or adapter anywhere. is the same or the base model is the same, we can use that bit or adapter anywhere. We don't have to save the base model every time. We can keep downloading it. We can download it when we need it, yada, yada. We can move this all around. It's fine. But the base model has to be literally exactly the same, right? It's like or else the bit won't fit. Yes, yes. Okay. It's got to be the same drill, right? Yes. Yes, yes, okay. It's got to be the same drill, right? Yes, yes, yes. Okay, so, or at least like the same little claw on the end of the drill. So, okay, so then there's this difference in language between if you read the paper and if you read the Mistral Fine-Tune thing. Can you talk a little bit about this trainable rank decomposition matrix versus matrix perturbations idea why are we getting this sort of um differentiation in language now is where's the perturbation idea coming from exactly it's just a difference in language i mean it's the same it means the same thing so it's not something separate. When we're training a weight matrix, right, we are perturbing the weight matrix. So when we, when we update our weights, we are wiggling it about, right? Or, you know, a fancier way to say wiggling about, of course, is just to perturb. Perturb, yes. per turn per turn yes but there's no difference the language is just uh fancier you know it's it's it's got more college words in it so it's talking about that delta w that little change in weights that were then sort of decomposing into a times b matrix here and so um so then just you know as we sort of think about the other couple letters on this chart here. Okay. Yeah. So I've got X by D. And can you tell me what these dimensions are and why they're relevant here? And then H as well. Yeah. So X is just the, basically when we're talking about, so, okay. 
The idea is that we have some base matrix and that base matrix is some you know d by d matrix our initial input is x and then our changed input is h right so all that this is saying is we have some d by d matrix which is represented on the left by that big blue square we turn that into a d by r r by d matrix set and then we concatenate those resultant matrices so we do we we get the product matrix of of a and b and then we concatenate it with or we just you know plus it to uh our pre-trained weights which are in form d by d and of course thanks to the way that the matrix math works out r by d times d by r is going to result in a matrix of size d by d so their sizes all match up x is our input and then h is going to be our output from this process X is our input, and then H is going to be our output from this process. All right, all right. So X is basically heading into the transformer. That's what sort of X represents, this X by D. It's sort of the embedded and positionally encoded information, and it's flowing through the block. And then H is sort of the output of, is this the attention layer or is this the output of the entire block here? So this is actually pretty interesting. So we can use LoRa where, where, so ever there is a matrix, it doesn't have to be just the attention mechanism. It can be in the MLPs. It can be anywhere that there's a big matrix that we don't want to be big, but instead wants to be small. So, you know, in the initial Laura paper, we did see that we only applied it to specific subsets of the weights in the model. Specifically, we applied it to QV, I believe, if I'm remembering correctly, but we applied to it only some layers. Now, we're very judicious. We apply it to basically everything, right? Where we're going to apply it to the MLPs, which is the feed-forward network. We're going to apply it everywhere we can. Right. In fact, with things like QLora we found, that's actually even better. It results in better models at the end of the day. But the idea is this is, Lora is not a process that's unique to attention. That's unique to, you know, specific layers in the transformer architecture. you know, specific layers in the transformer architecture. Now it is it's useful because the transformer architecture is specifically large language models are so huge and they have this property of intrinsic load dimension. So we can't just use this in like any old matrix. But for transformer matrices, yeah, we can just we apply it pretty judiciously. We just slap it in there. Okay. Okay. And, and I mean, let's go back to like the whole name, right? We say lower, lower, lower, but it's low rank adaptation. So it really is just sort of a matrix that can kind of now even be applied much more broadly than we thought in the initial paper. Is that right? I would say probably the application space is the same. Large language models is where we're going to see this the most. 
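For readers who want to see that diagram as code, here is a minimal LoRA-style linear layer in PyTorch showing h = W0 x + (alpha/r) * B(A x) with the base weights frozen. It is a teaching sketch, not the implementation used by Mistral FineTune or PEFT.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)               # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # down-projection (r x d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))          # up-projection, starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + (alpha/r) * B(A x); only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(4096, 4096)
h = layer(torch.randn(2, 4096))   # at init, B = 0, so the output equals the frozen base layer
print(h.shape)
```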
And then other kind of like larger uh larger models where we've trained things so much that they have this property uh you know the matrices are so huge the data is so plentiful but uh yes it is it's it's a specific the way we apply it has evolved or what we apply it to within that system has evolved even if the actual you know crux of the application is the same which is that's useful for lms it's not very useful like you know for your for your smaller networks or for like uh you know uh things like really small bert you know we're not gonna be thinking about this too much okay okay okay yeah because it's all about reducing the number of trainable parameters and like if we've got a consumer grade gpu and we can do like a relatively complete fine tuning on a pretty small model we'll just do that we don't need laura right it's it's really all about making sure that it aligns to the gpu power of the consumer today. For the GPU poor of us out there, right? All right. Sounds good. Thanks, Wiz. We'll come back to you to show us how to do Mistral FineTune. And speaking of Mistral FineTune, let's take a look a little bit closer at the specific library here. So what we can see with Mistral FineTune is it is this lightweight code base based on LoRa, blah, blah, blah. Now for maximum efficiency, it's recommended that you use a big daddy GPU. The code base is optimized for these kind of things, but for smaller models, we can use a single GPU. And then that's kind of the way we're going to show the fine tuning today. Now they did provide a note here on the repo that the goal is to provide a quote, simple guided entry point to fine tune Mistral models. This is what we're trying to test out today. We'll see what you guys think as well. Did they achieve their goal yet, or is there still work to do with Mistral FineTune? So they walk us through a number of methods that they can use for fine-tuning, a number of types of fine-tuning, specifically at first in the repo. They say, well, you can do pre-training. that is sort of continued pre-training. You can do instruction tuning and you can do function calling. Now, these are all fine tuning. OK, so pre-training is fine tuning, continued pre-training. Instruction tuning is fine tuning. Tuning for function calling is fine tuning. And again, they're all using the LoRa approach under the hood. Now to sort of get this done, it's very simple order of operations, similar to what you would see in any other fine tuning library, prepare the data set, verify it, start the training, make sure your config is right, and then test it out by doing inference. Now they they did also sort of note, hey, you can easily plug in 1B to this. And we went ahead and did that today because, you know, why not? Let's try out all the features and goodies here. When we looked at 1B, we were specifically looking at training loss, evaluation loss, and evaluation perplexity. Although there's a number of other things that Liz will show you is kind of available if you're linked up to 1B to look inside the black box as you're training. Okay. Now, when we think about loss, remember, remember everybody, like, you know, how do we calculate loss? Well, we're going to use cross entropy. Now to go much deeper on cross entropy, join us again next week when we're talking logits and loss. 
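Since the loss in question is next-token cross entropy, a tiny worked example helps connect the training curves to the math; the logits below are made up, and the reported evaluation perplexity is typically just the exponential of that loss.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0, 0.1, 0.3]])  # model scores over a toy 5-token vocabulary
target = torch.tensor([0])                            # the "correct" next token id

loss = F.cross_entropy(logits, target)
perplexity = torch.exp(loss)                          # eval perplexity = exp(eval loss)
print(f"loss={loss.item():.3f}, perplexity={perplexity.item():.3f}")
```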
We're going to go back down deep into the transformer and we're going to talk about how to exactly do these sort of weight updates during training associated with the loss function now the other thing that Mistral FineTune allows you to do and this is sort of an open question is this super valuable or not is it allows you to leverage their mixtral models, the mixture of experts models. And this is directly from the repo here. A best practice for mixtral models is that they're like, man, you really should train mixtral models a couple of times independently, because depending on the seed that you use during training, you can actually get a really, really high degree of variance between instantiations of fine tuning of mixtral models. And I've got a quick discussion point here that I want to bring Wiz back up for just in terms of the mixtral. is there a reason why we're not fine tuning mixtral today wiz it seems like it's cooler it's newer is it like harder or something to do this what's the deal it's not harder it's just it i mean in fact it's the same it's just uh you know it's just fine tuning nothing nothing changes but uh the mixtrel models do, you know, they have a GPU requirement that exceeds the possibilities of the CoLab environment. So, you know, remember, Mixtrel doesn't require a ton of active weights for inference, but it does require a lot of weights to be loaded in GPU memory, right? So even though when we're doing inference, we're not touching all those weights, we need to be able to in order to have all of the correct paths available to us through that model, which requires us to have a larger GPU memory capacity, even if we're not going to be using as many as we're doing inference. The inference is still zippy, still fast, but we have to have that capacity to hold the big model and all those available paths in it. That's right. That's right. And as we said before, you can use LoRa on not just the attention layer, but you can also use LoRa on, like you mentioned, the feed-forward layers. And for everybody sort of trying to recall exactly what Mistral kind of looks like and how it's sort of different, you know, from an architectural perspective, that feed-forward network layer is replaced with the sparse mixture of experts layer, right? So you're saying you kind of have to hold each of these kind of mini neural networks here, feed forward network one, two, three, et cetera, et cetera. You got to hold all of this in memory even if you use injectable trainable low rank decomposition matrices you still have to hold all of this there and and that makes it more computationally intensive and remember we not only have to have those low rank decomposed matrices we also need to have those those base matrices those big honkin uh frozen weights which are going to take up all of our capacity right so it's a the the adapters take up very little space thankfully but we gotta load all of this into memory so that every path is available right like it's like if we imagine that each of these, you know, feed forwards is the equivalent of like a door, right? We have to have all the doors available to us, even if we're not going to go through all of them every time, because we might need to get to a different room the next time we go through, right? 
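Rough arithmetic behind the "all experts must fit in memory" point, using the approximate published parameter counts for Mixtral 8x7B; the byte-per-parameter figures ignore activations, KV cache, and optimizer state.

```python
# Mixtral 8x7B is roughly 46.7B total parameters, with roughly 12.9B active per token
# (about 2 of 8 experts). All of them still have to be resident in GPU memory.
total_params = 46.7e9
active_params = 12.9e9

bytes_per_param = {"fp16/bf16": 2, "int4 (quantized)": 0.5}
for fmt, nbytes in bytes_per_param.items():
    print(f"{fmt:>18}: weights alone ~{total_params * nbytes / 1e9:.0f} GB "
          f"(vs ~{active_params * nbytes / 1e9:.0f} GB if only active experts counted)")
```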
So we have to have them all there, even though we're not going to use them every time we do through right uh so we have to have them all there even though we're not going to use them every time we do we do any kind of uh forward pass okay yeah yeah yeah makes a lot of sense okay so literally like the more experts you have the more compute you you're just you're forced to use even if you're fine-tuning even with laura even if you're forced to use even if you're fine tuning even with laura even if you're quantizing it just scales out with the number of experts that's right okay all right very cool all right then uh we're gonna set this up so we're getting just about ready to rock and roll into the demo today guys instruction tuning with mistral 7b is going to be based on first of all some instruction tuning data that we've grabbed off of the shelf and we're just going to use the dolly 15k data set so this is available directly on hugging face this is sort of a classic data set that's got a lot of different categories of instructions, closed question answer, classification, open QA, information extraction, et cetera, et cetera. And so it's sort of a broad perspective view. Now, we're not going to use all 15,000 data points for fine tuning, and we're just going to do a few hundred iterations. But this will give us a feel for what the difference is between the model that we use, the base model, and how well it does with our instructions after we fine tune it. Now, we're going to use Mistral 7B Base V3. The only difference between V2 and V3 is like so many models today, that sweet, sweet, long context window. So it's up to 32K, 32, 768 to be exact. And that's the real distinction from the V2. So with that, I'm going to pass it off to the Wiz to show us how to go through Mistral fine tune to do some instruction tuning on Mistral 7b take it away man yes okay so this is pretty straightforward uh thanks to this library however it does require you know we'll talk about it so first thing we got to do is grab some dependencies, pretty standard stuff. So we're going to go ahead and we're going to grab our Mestrel FineTune, which is the repository, which can be found here. The repository has great instructions. It has a tutorial that doesn't work currently, though I'm sure they'll update it. And the basic idea here is pretty straightforward, right? We need to get the model, do some stuff. We're going to walk through the notebook. So we'll get the repository, we'll CD into it, and we'll install all the requirements that we need. Easy peasy. You can ignore these dependency conflicts in the Colab environment, not worried about it. Then we need to download the model. We're going to download the Mistral 7B v0.3. As Greg said, this is a long context model. However, you know, keep in mind that because we're doing this in a collab environment, we're not going to be taking advantage of the long context. You know, it's just not possible to do in the Colab environment, so we're not going to do it. If you're using the recommended equipment, which is a, you know, a node of GPUs, you're going to be fine. But the idea is that we're going to use this 7B v0.3, still a great model, we love to see it. And then we're going to extract that model into a Mistral models folder. Easy. Now, the next step that we have to think about is we have to think about formatting our data set into the correct format. We're going to do instruction tuning. 
So we're going to follow the instruction tuning guidelines that they give us in their repository. As you can see, the notebook kind of just, you know, this is a JSONL file that we need with, you know, this key messages, which has a list of messages. The messages need to have a role in content, right? And this is very typical if you've seen fine tuning before where we have this role in system, we have this content in the system prompt. And then we have our role user with their content user prompt. And then our role assistant with the content response. And that's it, right? This is a pretty classic example of fine-tuning. And we, you know, it's easy enough to create this JSONL file. You do have to make sure that your data is in this specific format. So it is important that you've contorted things into this format, or else you will not find success, unfortunately. Now, we're going to be using some data from the limit, less is more for instruction tuning. We're specifically going to be using Instruct V1, aka Dolly HHRLHF. And this is the data set that we're going to be using today. It's a fairly standard data set, pretty classic, right? From back in the day, it feels like the idea is we have some instructions, we have some responses, and we're going to train the model to get good at following that instruction task. And that's the idea. Okay, so in order to do this, we're gonna first just create a data directory to shove all our data into. We're gonna cheat a little bit here. We're gonna use Huggy Face Hub instead of just Pandas. Huggy Face Hub is just easy, easy to use, right? The dataset format is familiar and great. We're gonna go ahead and use our notebook login, because if you're using this dataset, it might require accepting a EULA. And in order to make sure we've done that, we'll need to prove that we are who we say we are on Hugging Face. Then we're going to load our dataset, which is from Mosaic ML, Dolly HHRLHF. It's a very fun time. The, you know, H H R L H F. It's a very fun time. Uh, the, you know, the best part of this, uh, you know, Dolly H H R L H F, uh, data set is that it's simple, straightforward. So it's easy to contort it into what we need it to be. As you can see, it's not in a, uh, format that, uh, you know, uh, Mistral expects currently. It's in fact's in fact definitely not in that format, right? So we have to contort it. We're going to create a simple formatting function that does that. We're going to create the expected format in a column called data, where we have our messages, which is a list of messages that contain key role with the role and key content with the content. And away we go, easy peasy. We're just going to make sure that our formatting function works. We're going to do that by testing it on one of the samples. And we're going to go to our messages. We have a system, below is an instruction, design is perfect. And then our user, what is Kangen water? And then we have this explanation. Very cool, very cool very cool okay so we map our mistral fine-tune format function over the entire data set training and test we can see now that we have this data response with about 60,000 prompts and then we have our test set with about 5k prompts nice and good we're going to save those as JSON-L files, since that's what the Mestral Fine-Tune library currently expects. And we can just write these out. We're going to dump the data into that JSON-L file and separate them with new lines. That's where the JSON-L comes from, right? JSON lines. 
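A hedged sketch of that formatting step: map each example into the messages shape and write one JSON object per line. The prompt/response column names match mosaicml/dolly_hhrlhf at the time of writing; adjust them if your dataset differs.

```python
import json
from datasets import load_dataset

dataset = load_dataset("mosaicml/dolly_hhrlhf")

def to_mistral_format(example):
    # Contort one row into the {"messages": [...]} shape Mistral FineTune expects.
    return {
        "messages": [
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["response"]},
        ]
    }

formatted = dataset["train"].map(to_mistral_format)

with open("data/train.jsonl", "w") as f:
    for row in formatted:
        f.write(json.dumps({"messages": row["messages"]}) + "\n")  # one JSON object per line
```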
So every line is a new JSON object. And we can do that for our test as well. We're going to call our test eval because we're not actually going to do testing with it. We're going to evaluate during training with it, which is always fun. But it's just called test by default in data sets. So we're going to leave it there. Now we need to verify the dataset. And we're going to enter into what I believe is the current kind of, I guess it would be, I would call it a shortfall of this particular library in its current form, right? So we're going to run these reformat datas. First of all, they error silently for the most part. So if your data is not in the correct format, they might just not say anything. If your data is in a recognizable format that doesn't work, then they will complain, which is what we want. That's ideal. And they do try to reformat. But as they call it in the repo, right, if you have some exotic data, this isn't going to do it, right, you need to kind of do the work to get the format into the shape that is desired by the library, this is not new or specific, you know, it's not specific to Mistral FineTune. Now, the next piece is our training. It's our YAML file. So instead of using kind of, you know, like those long args lists or, you know, a bunch of parameters, we're going to use this idea of a YAML file. And the YAML file is going to dictate everything. So first of all, if we look at their repository and we look at all the different cool hyperparameters we have, sorry for all of the training, but we have a bunch of cool hyperparameters, right? We've got all kinds of stuff. Checkpoint frequency. We've got log frequency. We've got rank. We got it all, right? We're going to express all of this in this.yaml. it all right um we're gonna express all this in this dot yaml now it it's not necessarily like the best uh thing in the world but it works and that's what we want so first of all we need to set up the data part of our yaml file which we're just going to pass in our data a header and then we're going to give instruct data and eval instruct data tag that we pass our, you know, the paths to our training and eval data. Easy peasy. Then we go to our model path for our model ID or path, which is just going to point to the downloaded model that we created. Then we're going to create some hyper parameters, classic hyper parameters. We've got lower rank, sequence length, batch size, micro batches, max steps, learning rate weight. It's got it all, right? But it doesn't have it all, but it has a lot of things. And this is one of the limitations of this particular strategy. It doesn't have it all, right? If we look at the actual kind of options that we currently have available to us, it's not everything that we're used to if we're coming from another library. However, it does make sense, and it works, and that's great. Now, you'll notice that the sequence length being used here is 4K. This is because we have a limited amount of GPU memory. We want to keep it relatively low. So where we might be able to get away with something higher in the 7 to 8k range, we're just going to keep it at 4k to make sure that we're not blowing through our memory. Our LoRa rank is going to be 64, you know, dealer's choice. We just can't make it too high or else we'll run out of memory. And of course, we're only going to do this for 300 steps. So we're not going to fully train on our data set. That would take a very long time. We're going to start a learning rate rather high. 
And then we're going to decay it at a pretty standard rate, I think, from the chinchilla paper. And then we'll put our output directory to this content slash limit test. And then we just have to convert this to YAML format. So we do that here. You'll also notice we have some other parameters that we can set like seed, how often do we log? How often do we eval? You know, are we going to do eval or not? How often should we save a checkpoint? And then save adapters. So remember that because we're doing this adapter fine-tuning, we need to be able to save those adapters periodically, right? So we're not actually training the model. It's very, it's silly to say because we're definitely training the model, right? But we're actually training these adapters, and the adapters modify the model. And so this is the idea, right? We want to save those adapters, or those two broken out matrices, we want to save those as we're going through training, right? And then our run directory is just going to be where we save this run. We're also going to integrate weights and biases, like Greg said. Weights and biases is an easy integration, which is we just have to, you know, provide these options. Our mistral fine tune is what we're going to call the project. The run name is going to be dolly instruct. We're going to provide our API key. And then we're going to write these things to our YAML. We're going to use offline equal to false. You know, there you go. Then we're going to save all of that out to a YAML file. And we can then use that YAML file to validate our data. And what happens here is that there's a script that validates all of our data. Data is correctly formatted. Stats for the data. We get all this, you know, cool stuff. It also gives us, in a very fun way very fun way an eta so how long this might take right which is pretty cool um and you love that so uh we validate the test we see that everything's no errors we get no errors twice in a row no errors twice in a row probably means that there's no errors which is uh which is always ideal so now that we've done this, we can go ahead and start our training. Training is very straightforward. We just need to make sure because we're in Colab that we provide these additional environment variables so that we target the GPU in our Colab environment. And then we're going to make sure there's nothing in the test folder. And then we're going to run our torch run with our training script from Mr. FineTune. And then we're going to go ahead and point to that same YAML that we just created above that we use to validate our data. So that's great. We love to see that. I see a question in the chat. What do you number of steps mean in this context? That's just the number of iterations that we're going to run through so uh you know when it says our our our file here right we're doing sequence like uh with our batch size and number of micro batches so a number of steps is going to be the number of times that we repeat uh an iteration on a batch which contains eight micro batches so that's the idea. You can see that it's currently training now. We train it beforehand and then we're doing another one just to show off the wand B integration. Very cool. So let's look at wand B. Wand B, as you can see, that's from the completed run. This is from the run that's currently ongoing. So you can see that we have a bunch of different interesting things being tracked. 
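For orientation, here is a sketch of writing such a config from Python. The key names mirror the example YAMLs in the mistral-finetune repository as described in this walkthrough, but treat the exact names and values as assumptions and check the repo's current examples before training.

```python
import yaml

config = {
    "data": {
        "instruct_data": "data/train.jsonl",
        "eval_instruct_data": "data/eval.jsonl",
    },
    "model_id_or_path": "mistral_models/7B-v0.3",
    "lora": {"rank": 64},
    "seq_len": 4096,                  # kept low to fit a single Colab GPU
    "batch_size": 1,
    "num_microbatches": 8,
    "max_steps": 300,
    "optim": {"lr": 6e-5, "weight_decay": 0.1, "pct_start": 0.05},  # illustrative values
    "seed": 0,
    "log_freq": 1,
    "eval_freq": 100,
    "no_eval": False,
    "ckpt_freq": 100,
    "save_adapters": True,
    "run_dir": "/content/limit_test",
    "wandb": {"project": "mistral-finetune", "run_name": "dolly-instruct", "offline": False},
}

with open("example.yaml", "w") as f:
    yaml.safe_dump(config, f)
```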
And if we look at something like our training loss, we can see that we have this slowly declining training loss, but it's very noisy. Our learning rate is being decayed, as we would expect. And then we actually just finished a eval in, we'll do many more of. So how will this look at the end? Well, this is an example of the completed run where we have all 300 of our steps. You can see that our perplexity goes down, our training loss, our evaluated training loss goes down, and our eval loss goes down. This is the expectation, of course. As we train, loss go down, a very classic example, right? This is the idea with the YDB integration. This is all just done for us. We don't gotta do anything. You'll love that. So now that we're done training the model, what do we have to do? Well, we've gotta go ahead and use the model, right? So we're gonna go ahead and use Mistral Inference to do this. Mistral Inference is Mistral's version of, you know, how to do inference with the Mistral models, unsurprisingly. We're going to go ahead and we're going to load our tokenizer from the downloaded model. We're going to load our model from the downloaded model. And remember, right, the model is the same. We just need those adapters. So then we load our lora adapters from our training directory and then we can send it a request very similar to how we would open ai very very convenient and then we're going to tokenize our request generate request request and print some results you can see that our results are very straightforward machine learning is a subset of artificial intelligence allows computers to learn from data without being especially programmed i mean it's great right it does the thing it follows the instruction the instruction was to uh explain machine learning to me in a nutshell so it did great uh and that is mistral fine tune a library that helps us fine-tune mistral models uh don't forget to like comment comment, subscribe, hit the bell. It helps us out. We're here every Wednesday, having a great time talking about cool stuff. So thanks. I'll pass you guys back to Greg. Thanks, Wiz. So that's Mr. Fine Tune. And the last thing that we'd like to sort of point out is, so, you know, how are these two things different we'll kind of kick off our discussion here let's remind ourselves that full fine-tuning the sort of problem with it is that it's really not cool if you're GPU poor and so the hugging Face libraries use these parameter efficient fine-tuning methods that are just simply better than full fine-tuning. Same problem, right, that it's trying to solve. The number one PEFT method is LoRa. That's the one you should know. And if you're a beginner, as we mentioned in the beginning, you should probably still start there. But Mistral FineTune does do the thing. Their CDN, their content delivery network, is rather slow. It took nearly an hour, maybe 45 minutes, to download the model. Their opinionated data formatting is going to give you some potential issues if you have complex data formatting. And remember, Mixtrel is simply a more compute intensive thing to deal with, not to mention you need to do multiple runs because of the nature of the way the Mixtrel models work, aligning with their best practices in the repo. And then LoRa just sits at the bottom of everything. You can do it on attention layers. You can do it on multi-layer perceptrons, feed forward networks. You can do it at inference. You can plug in the adapter. You can plug in the empty adapter and calibrate it during fine tuning. 
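A hedged sketch of that inference step with mistral-inference; the module paths, checkpoint path, and generation arguments follow the library's README at the time of writing and may differ between versions, so treat this as an approximation of the notebook rather than its exact code.

```python
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Load the base tokenizer and model, then attach the trained LoRA adapter.
tokenizer = MistralTokenizer.from_file("mistral_models/7B-v0.3/tokenizer.model.v3")
model = Transformer.from_folder("mistral_models/7B-v0.3")
model.load_lora("/content/limit_test/checkpoints/checkpoint_000300/consolidated/lora.safetensors")  # assumed path

request = ChatCompletionRequest(
    messages=[UserMessage(content="Explain machine learning to me in a nutshell.")]
)
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens], model, max_tokens=256, temperature=0.7,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```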
And so make sure that you do have a handle on the concepts beneath the code. And that is Laura. To kick off QA, I'd like to go ahead and invite Wiz back up to the stage. One more little discussion point. So as we think about Hugging Face versus Mistral FineTune, what jumps out to you as similarities and differences that people should keep in mind yeah i mean they're both used for fine-tuning models uh they both will fine-tune models so you can find two models with both you love to see that uh otherwise it's i mean the differences are quite superficial it's doing the same stuff under the head. Transformers had a long time, right? To, to, to polish this out, to build things that, that work exactly the way you expect and have all of the bells and whistles we've come to love about that, that kind of a library, right? And, and Mistral's just getting started. So I imagine that over time, you know, this Mistral fine tune will evolve into a solution that makes a lot of sense and is quite useful. For the time being, I think, you know, they're on the path. It's a good first couple steps in that direction, but the ease of use is just not there yet, in my opinion. Okay. All right. Yeah, it takes a long time to create really, really clean, user-friendly products. And, you know, Mistral's putting out a bunch of stuff these days. Look forward to seeing what they continue to put out as a true competitor to OpenAI, it seems, across the sea. All right. So we are going to get started with Q&A. We got a solid 10 minutes, everybody. We've got a QR code up in the top of your screen that you can add questions to and upvote questions you like best. I'm going to go ahead and kick it off with the first upvoted question today. Can we combine these adapters i mean training one to program another for medical and combined together um let's just sort of talk about combining adapters i guess yeah i mean you can model merging exists and basically is that, uh, so yes, the answer is just a simple yes. Um, we, we can't do that. Yeah. Yeah. And model merging is basically like add them together, right? Like, uh, these perturbations, let's say these injectable rank, injectable low rank decomposition matrix perturbations to the weights. That's what we're sort of adding together when we do the model merging. And we do have some model merging material that we've gone through recently with the creator and with RC on our YouTube channel. Check that out. Next question. So can we invent a multi-adapter instead of multimodal? How does multi-adapter fit into multimodal? And I think this is sort of a different question here, baked in here, Rami, and have one adapter or have one adapter as a router. Maybe we'll take those one at a time. So multi-adapter for multimodal. Yeah. So probably won't be for multimodal. It's not like each adapter will handle a separate modality, but it is the case that we can create a multi adapter system instead of multiple models. Um, but, uh, in terms of like getting a vision model or like a audio model as an adapter to like a language model, it's not going to work. We need to have that image modality or language modality as part of the model already. Um, uh, and then having one adapter as a router, having one model that we build a router for that uses the right adapter. Yeah, sure, absolutely. That's possible. We might use like a simple classification model on the head of it in order to route it to the correct adapter. 
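To ground the comparison, here is a hedged sketch of the equivalent Hugging Face PEFT setup, plus merging a trained adapter back into the base weights, which is one simple answer to the "combining adapters" question above. Model IDs, hyperparameters, and the adapter path are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, PeftModel

# Training-time setup: wrap the frozen base model with trainable LoRA matrices.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3", torch_dtype=torch.bfloat16, device_map="auto"
)
lora_config = LoraConfig(
    r=64, lora_alpha=128, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()   # typically on the order of 1-2% of all parameters

# After training: reload a clean base, attach the saved adapter, and fold it into the weights.
clean_base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3", torch_dtype=torch.bfloat16, device_map="auto"
)
merged = PeftModel.from_pretrained(clean_base, "path/to/saved/adapter").merge_and_unload()
```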
But that's still a space that's being explored very much by people. Well, I mean, that kind of reminds me of the idea that within the repo we have the function calling capability, and of course when we talk about fine-tuning for function calling, we can very easily sort of imagine a router being used in a more agentic way, right? And so I think one of the key things that I want everybody to take away, that maybe isn't obvious to everybody, is that function calling is just another form of fine-tuning. It just requires, what, a more specific formatting, Wiz? That's basically it. That's it. Yeah. Okay. All right. So, what's the best GPU to buy? Here's a good one for you, Wiz: what's the best GPU for a small-scale industrial application? 4090. Just get a 4090. You know, it's a great card. A 3090 will also do; the 3090 Ti, I think, is the 24-gig card. You don't need to spend, you know, enough for something like an Ada A6000, you don't need to. So, yeah, basically just accumulate cards that have 24 gigabytes of GPU RAM in whatever flavor is cheapest to you, and then go from there and just stack them together till you have your little 24-gig card cluster. Okay, so Don asks: isn't YAML less likely to handle data format issues well compared to JSON? So we're only using the YAML for the configuration file. Everything else is in JSON or JSON-L, and the data is held in JSON-L. We're just using YAML as the config. But yeah, it's just a choice. YAML, config: name a more iconic duo. I can't name one. Yeah. Okay. Can we do this without wandb (Weights & Biases)? I know the answer to that. Yes, you can. It's definitely optional. Any other comments on that? We like wandb. Oh yeah, wandb is great. It's free. It works. The real thing to say is, you should just use wandb because it's free and it's great. Yeah. It's funny, because we were having the same discussion in class last night. Like, why should we use wandb? Why wandb? It's like, I think that's a good enough reason. Yeah, it's free and it's great. Okay, another question from Rami here. Any guide sessions or scripts to prepare and test a standard data set for Llama, Mistral, or other LLM fine-tuning data set formats? I think this is a dataset formatting question. I'd probably point you to specific fine-tuning events. We've got a fine-tuning playlist that, you know, if we did Llama, you've got to put it in Llama format. If we did Mistral, you've got to put it in Mistral format. We've done other ones like OLMo and, you know, a few others as well. Check out our fine-tuning playlist. Anything else to add there, Wiz? No, I think that's a great place to start. It's just a lot of reading and working, but you'll get it quickly. If we thought a dataset formatting event would slap, we would do one. This is the first time I've ever heard that feedback. You guys want it? Let us know. We'll put it together. How does the choice of optimizer, like Adam versus stochastic gradient descent, impact the performance of a fine-tuned LoRA model? Is there like a right answer for optimizer? The right answer is Adam or a version of Adam. Just straight up. That's what everyone will use, does use. They have paged versions. They have fused versions. They have all kinds of fun kernel optimizations that make it very zippy, very quick. So Adam is basically where we're at. Hmm.
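To illustrate the data/config split mentioned above (YAML only for the run configuration, training examples in JSON Lines), here is a small sketch that writes a chat-style JSONL file. The "messages" schema shown is the common OpenAI-style chat format; the exact field names any given fine-tuning library expects are an assumption, so check its docs.

```python
# Sketch of writing instruct-style training data as JSON Lines (one JSON object per line).
# The "messages" field names are an assumption about the expected schema.
import json

examples = [
    {
        "messages": [
            {"role": "user", "content": "Explain machine learning to me in a nutshell."},
            {"role": "assistant", "content": "Machine learning is a subset of AI that lets computers learn patterns from data."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```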
The here's an interesting question. So since we brought up this attention layer versus MLP layer fine tuning, which one's better? Which one should I do? fine tune attention layer fine tune MLP? Why do it all? Oh, would you? Yeah, I mean, I guess. You know, target either or if you really wanted to, but would you yeah i mean i guess you know target either or if you really wanted to but like intuitively like attention feels like the place to start but i don't know we we'll do all of it just because it's it's recommended thing to do right it's uh it's easiest and it's the lowest uh memory and we're gonna be fine to be very clear we're gonna be fine tuning those layers no matter what it's just whether or not we're doing full fine tuning or laura adapter fine tuning that's different but they're they're gonna get fine-tuned uh it's adapter fine tuning that's different but they're they're gonna get fine-tuned uh it's uh we have to uh so there you go boom there it is that's a great place to wrap up thanks wiz for showing us mr fine tune today and that brings us to the end of today's event don't forget to like and subscribe and ring that bell if you like this session and you're not in our Discord yet, you should definitely join. We got great community vibes going. And I'd really love to see folks that join also consider building and deploying their first ever LLM application. This is the Beyond Chat GPT event that we put together a while ago now at this point. And it's something that we require for everybody that takes our AI engineering bootcamp. So if you're up for a challenge, I would encourage you to see if you can build and deploy your first LLM and share it in Discord in the Build Ship Share channel. There's a ton of awesome activity going on all the time with folks building their very first application. Now, if you really want to accelerate your AI engineering learning, you might check out our AI engineering boot camp. We've got a lot of great, cool, fun, interesting announcements coming soon. We just launched cohort three cohort four is in August. So you can start planning for it. Now, if you want to learn with me, with a great group of peers, AI engineers, leaders, and many others, as well as get access to really high quality opportunities to get in front of hiring partners based on your certification. Consider this as a pathway in 2024 for you. Next week, we talk loss functions for our Logits and loss event, all on training and fine tuning. We're going down deep into the transformer again, so join us for that one. And finally, provide any feedback that you have. We take it seriously and we try to improve all the time. As always, in the meantime, we will do our best to keep building, shipping, and sharing. And we hope that you do the same. Thanks, everybody. Have a great rest of your week, and we'll see you all real soon. Bye, guys.", "datetime": "2024-06-09T19:20:46.574940"}
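The attention-versus-MLP question discussed above comes down to which modules LoRA targets. A hedged sketch with Hugging Face PEFT's LoraConfig; the module names are typical for Mistral/Llama-style models and are assumptions, so confirm them against model.named_modules() for your checkpoint.

```python
# Choosing LoRA target modules: attention projections only, or attention plus the MLP blocks.
from peft import LoraConfig

attention_only = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

attention_and_mlp = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # feed-forward / MLP layers too
)
```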
train/transcriptions-85f73827-bedb-44b6-a98d-c30a960b96c9.json
ADDED
@@ -0,0 +1 @@
{"url": "https://www.youtube.com/watch?v=Anr1br0lLz8", "transcription": " Hey, Wiz, is there a way to know what comes out of any RAG application that we build is right or correct? Well, it's really hard to say things like it's absolutely right, it's absolutely correct, it's absolutely true. That's pretty difficult. Okay. Okay. So there's no absolutes. It's absolutely correct. It's absolutely true. That's pretty difficult. Okay. Okay. So there's no absolutes, but is there a way to know that changes that we make to the system to our RAG application makes the performance better or worse? That we can know absolutely. So you're saying there's a way to assess RAG systems? Yeah. I think like assess RAG systems? Yeah. I think like a RAG assessment kind of make. A RAG assessment. Yeah, man. Let's show everybody how to do that today. Let's do it. All right, man. My name is Greg and we are here to talk RAG eval today. Hey, I'm Makerspace. Thanks for taking the time to join us. Everybody in our community, we'd love to hear your shout out in the chat where you're calling in from. Today we're going to walk through a simple RAG system built with the latest and greatest from Langchain, their most recent stable release and most stable version ever, we're also going to outline how you can actually assess your RAG systems using the RAG assessment or RAGIS framework. Finally, we'll do some advanced retrieval. We'll just sort of pick one off the shelf that's built into Langchain and show how we can go about this improvement process. We are very excited to have the Ragas co-founders and maintainers Jitin and Shaul joining us for the Q&A today. So definitely get your questions in the chat, anything you're curious about Ragas. We have the creators in the house today. And of course, we'll see Wiz, aka the LLMm wizard and cto at am makerspace back for demos real soon so let's get into it everybody today we're talking rag evaluation this black art that everybody is really really focused on as they start to build prototype and deploy these systems to production in 2024. as we align ourselves to this session we want to get out of this what's up with this langchain v 0.1 that just came out we want to understand how we can build a rag system with the latest syntax and then also evaluate it there's a lot of changes happening on the ragas side just as on the langchain side finally we want to see how we can pick different tools different ways to improve our system our application see how we can pick different tools, different ways to improve our system, our application, and how we can then quantify that using evaluation. So first we'll go into laying chain, then we'll go into a high level view of RAG and see exactly where the different laying chain components fit in. Finally, we're going to see what you all came here for today, the RAGIS metrics and how to implement the RAGIS framework. So we'll be building, we'll be evaluating, we'll be improving today and the Q&A should be pretty dope. So, Langchain v0.1.0. What's Langchain all about again? Well, it's all about enabling us to build LLM applications that leverage context, our so-called context aware, so we can connect to other sources of data. We can do lots of interesting prompt engineering. We can essentially do stuff in the context window that makes our applications more powerful. And also reasoning. This is the agentic behavior stuff. And look for another event from us soon that focuses more on reasoning. Today, we're focused on context, though. 
And we're doing that in the context of V0.1.0. The blog that they put this out with said, the journey of a thousand miles always starts with a single step. And that's kind of where Langchain sees themselves to be today. Langchain Core has come together, Langchain Community has come together, and they're officially going to be incrementing v0.1 to v0.2 if there are any breaking changes they'll be incrementing this and they'll continue to support v0.1 for a time every time this gets incremented of course as bug fixes and new features come out, they're also going to be incrementing now in this third v0.1.x slot. So pay attention to how quickly the development goes from here, because I imagine there's a lot of great stuff on the horizon coming from Langchain. There was a lot of great stuff in the v0.1 release. There was a lot of great stuff in the v0.1 release. And we're going to primarily focus on retrieval today, and also on this sort of langchain core that leverages L-C-E-L or the langchain expression language. So in terms of retrieval, there's going to be a lot that you can check out and add after today's event that you can then go assess to see if it actually helps your pipelines. So definitely encourage you to check those things out in more detail after today. For production components, there's a lot that we hope to explore in future events as well. But starting from the ground up here, we want to kind of focus on this Langchain core. This is the Langchain expression language, and this is really a very easy kind of elegant way to compose chains with syntax like this. This dovetails directly into deployments with LangServe, into operating in production environments and monitoring and visibility tooling with LangSmith. So really it kind of all starts from here and allows you to really do some industry-leading best practice stuff with these tools. Now today we're going to focus on a couple of the aspects of Langchain. We're going to take Langchain core functionality, and then we're also going to leverage models and prompts, as well as retrieval integrations from Langchain community. Chains, of course, are the fundamental abstraction in laying chain, and we will use those aspects to build our RAG system today. When we go and we assess, then we're going to take it to the next level with an advanced retrieval strategy. This is going to allow us to quantitatively show that we improved our RAG system. So quick recap on RAG in general for everybody. The point of RAG is to really help avoid these hallucinations. This is the number one issue. Everybody's talking about confident responses that are false. We want our systems, our applications to be faithful. And we'll see that we can actually evaluate this after we build out systems and instrument them with the latest evaluation tools. We want them to be faithful to the facts. We want them to be fact checkable. This idea of RAG is going and finding reference material, adding that reference material to the prompt, augmenting the prompt, and thus improving the answers that we generate. Visually, we can think of asking a question, converting that question to a vector, embedding representation, And then looking inside of our vector database, our vector store, the place where we store all of our data in vector format, we're looking for similar things, similar to the vector question we asked. We can find those similar things. 
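The "syntax like this" being referenced is LCEL's pipe-style composition. A minimal sketch (the model name and prompt here are placeholders, not from the walkthrough):

```python
# Minimal LCEL composition: prompt | model | output parser.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {topic}")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

print(chain.invoke({"topic": "the LangChain v0.1.0 release"}))
```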
And if we've set up a proper prompt template before we go into our LLM, something that says, for instance, use the provided context to answer the user's query. You may not answer the user's query unless you have context. If you don't know, say, I don't know. And then into this prompt, we inject these references, we augment this prompt. And then of course, where does the prompt go? Well, it goes into the chat model into our LLM. This gives us our answer and completes the RAG application input and output. So again, RAG is going to leverage models, prompts, and retrieval. In terms of models, we're going to use OpenAI models today. One note on syntax is that the chat style models we use generally leverage a system user assistant message syntax and Langchain is going to tend to prefer this system human AI syntax instead which personally I think is a little bit more straightforward in terms of the prompt template well we already saw it this is simply setting ourselves up for success so that we can inject those reference materials in and we can generate better answers. Now, it's important what these reference materials contain and how they're ordered. And that is going to be the focus of our evaluation. Of course, when we create a vector store, we're simply loading the docs. That's a document loader. Splitting the text. That's the text splitter. Creating embeddings. We use an embedding model. And storing the vectors in our vector store. Then we need to wrap a retriever around, and we're ready to rock and rag. Our build today is going to leverage, as mentioned, OpenAI models. We're going to leverage the Ada Embeddings model and OpenAI's GPT models. And the data we're going to use is actually, we're going to set up a rag system that allows us to query the Langchain v0.1.0 blog. So we'll read in this data and we'll create a rag based on this Langchain blog that we can ask, see if we missed anything that we might want to take away from this session that we could also learn about the 0.1.0. So to set up our initial rag system, we're gonna send you over to Wiz to show us Langchain v0.1.0 RAG setup. Hey, thank you, Greg. Yes. So today we're going to be looking at a very straightforward RAG pipeline. Basically, all we're going to see is how we get that context into our LLM to answer our questions. And then later on, we're going to think about how we might evaluate that. Now, the biggest changes between this and what we might have done before is the release of Langchain v0.1.0. So this is basically Langchain's, you know, first real minor version. We're looking to see this idea of, you know, splitting the core langchain features out. And that's exactly what, you know, Greg was just walking us through. Now, you'll see that we have mostly the same code that you're familiar with and used to, we can still use LCL, as we always have have that staying part of the core library. But we also have a lot of different ways we can add bells and whistles or different features to our Langchain application or pipeline. So in this case, we'll start, of course, with our classic import or dependency Langchain. We noticed we also have a specific package for OpenAI, for core, for the community Langchain, as well as Langchain Hub. And so all of these let us pick and choose, pick and choose whatever we'd like really, from the Langchain package. This is huge, right? 
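A RAG prompt template along the lines described earlier in this passage, in LangChain's system/human message style; the wording is paraphrased from the transcript rather than the exact prompt used in the walkthrough.

```python
# System/human style prompt with slots for the retrieved context and the user's question.
from langchain_core.prompts import ChatPromptTemplate

rag_prompt = ChatPromptTemplate.from_messages([
    ("system", "Use the provided context to answer the user's query. "
               "If you cannot answer from the context, say \"I don't know\"."),
    ("human", "Context:\n{context}\n\nQuery:\n{question}"),
])
```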
So one of the things that people oftentimes are worried about language there's a ton of extra kind of uh unnecessary things in there well this is you know goes a long way to solving that problem um and it's awesome so let's see first which version we're working with uh so if you're watching this in the future you can be sure so we're on version 0.1.5 so we're already at dot five um line chain you know they're they're hard at work over there uh we're gonna need to add our open AI API key since we are going to be leveraging open AI uh basically this is a uh you know way that we can both use our lm for evaluation but also for generation and also for powering the application. We're just going to use this one LLM today for everything. When it comes to building our pipeline, it's very much so the case that, you know, we have the same stuff that we always have. We need to create an index and then we need to use an LLM to generate responses based on the retrieved context from that index. And we're going to get started as we always do with creating the index. Now we can and will still use LCEL. LCEL is important. You know, one of the things that we're going to show in this notebook, because you don't have to use LCL, they've implemented some abstractions in order to modify the, you know, the base chains that you're used to importing to LCL format, so you get all the advantages. But we're still going to look at LCL today, because it is an important piece of the line chain puzzle. because it is an important piece of the Langchain puzzle. But first, we're going to start with our first difference, right? So we're going to load some data, and we're going to load this from the Langchain community package where we're going to grab our document loader to get our web-based loader. You know, importantly, this is not part of core Langchain. This is a community package, and it works exactly the same as it used to, as it always has. You know, our web-based loader is going to let us load this web page, which we can do with loader.load. And then we can check out that we have our metadata, which is just for our web page. We're happy with that. Next, we need to do the second classic step of creating index. We have a document in this case. You know, it's just one document, but we have it and we need to convert it into several smaller documents, which we're going to do with the always fun recursive character text splitter. You'll notice that this has stayed part of core. So this is in just the langchain base package. Hooray. We have a recursive character text splitter. We've chosen some very arbitrary chunk sizes and overlaps here and then we can split those documents this is less so focused on a specific uh Lang chain rag and more on the evaluation so we're just kind of choosing these values uh you know to to showcase what we're trying to showcase you see that we've converted that one web page into 29 distinct documents. That's great. That's what we want to do with our splitting. Next, we're going to load the OpenAI embeddings model. Now, you'll notice that we're still using text embedding AIDA 002. We don't need to use this embeddings model. And it looks like very soon we'll be able to use OpenAI's latest model once the tick token library updates there's a PR that's ready just waiting to be merged which is going to let us be able to do that but for now until that change is implemented we're going to stick with text data embedding 002 and this is like the classic embedding model, right? Nothing too fancy. 
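A sketch of the setup steps narrated above: the v0.1 package split, the web loader, the splitter, and the embeddings model. The blog URL and the chunking numbers are placeholders/assumptions, not the exact values used in the walkthrough.

```python
# pip install langchain langchain-core langchain-community langchain-openai langchainhub
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

# Load the LangChain v0.1.0 announcement post (URL assumed).
loader = WebBaseLoader("https://blog.langchain.dev/langchain-v0-1-0/")
documents = loader.load()

# Arbitrary, untuned chunking values, as in the walkthrough (exact numbers are placeholders).
splitter = RecursiveCharacterTextSplitter(chunk_size=750, chunk_overlap=100)
splits = splitter.split_documents(documents)

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
```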
Just what we need. When it comes to our FAISS vector store, what we need is to get that from langchain community. But otherwise, this is exactly the same as it used to be, right? So there's no difference in the actual implementation of the vector store. It's just coming from the community channel. We'll pass in our split documents as well as our embedding model and away we go. Next, we're going to create a retriever. This is the same as we've always done, .as_retriever() on our vector store. Now we can interact with it through that retrieval API. We can test it to see it working. Why did they change to version 0.1.0? And we get some relevant documents to that query that mention the 0.1.0 release. Hooray. Now that we've got our retrieval pipeline set up, that's the R in RAG, we need to look at creating that AG. So what we're going to do is showcase a few different ways that we can create a prompt template. You can just pull it from the hub. So there are lots of different community-created or LangChain-created prompts on the hub. The idea is that, you know, you can just pull one that fits your task from the hub, but the one that we're showcasing is maybe not ideal. So we're going to go ahead and create our own. You can still do this process if you want to create your own. You don't have to use one from the hub. And so we're just going to create the simple one: answer the question based only on the following context. If you cannot answer the question with the context, please respond with "I don't know." That's a classic. We pass in our context, we pass in our question, away we go. And you'll notice that this is exactly the same as it used to be. Let's go, LangChain. Now we'll set up our basic QA chain. I've left a lot of comments here in the implementation of this LCEL chain in order to hopefully clarify exactly what's going on. But for now, we'll just leave it at: we can create this chain using LCEL. And we want to pass out our context along with our response. This is important in order for us to be able to do those evaluations that we're hoping to do with Ragas. So we do need to make sure that we pass out our context as well as our response. This is an important step. And we'll look at another way to implement this chain a little bit later, which is going to showcase a little bit more exactly what we can do to do this a little bit easier while still getting the advantages of LCEL. You'll notice we're just using GPT-3.5 Turbo. That's it. And there you go. Now we can test it out and we can see, you know, what are the major changes in v0.1.0? The major changes are... it goes on, it gives a correct answer. That's great. And we have: what is LangGraph? And basically the response from the LLM is, I don't know, which is not necessarily satisfying. So we're going to see a way to improve our chain to get a better answer to that question. And the next step now that we have this base chain would be to evaluate it. But before we do that, let's hear from Greg about how we're going to evaluate it and what we're going to evaluate it with. And with that, I'll pass you back to Greg. Thanks, Wiz. Yeah, so that was LangChain v0.1.0 RAG. Now let's talk RAG assessment. The Ragas framework essentially wraps around a RAG system. If we think about what comes out in our answer, we can look at that, we can assess different pieces that helped generate that answer within the RAG system.
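A sketch of the index and LCEL chain described above, reusing rag_prompt, splits, and embeddings from the earlier sketches (those names are assumptions carried over, not the notebook's exact variable names). The key detail is that the chain returns the retrieved context alongside the response so Ragas can score it later.

```python
from operator import itemgetter
from langchain_community.vectorstores import FAISS      # requires the faiss-cpu package
from langchain_openai import ChatOpenAI

vector_store = FAISS.from_documents(splits, embeddings)
retriever = vector_store.as_retriever()
llm = ChatOpenAI(model="gpt-3.5-turbo")

# First dict: build {"context": retrieved docs, "question": ...}.
# Second dict: return the generated response *and* the context it was grounded on.
rag_chain = (
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    | {"response": rag_prompt | llm, "context": itemgetter("context")}
)

result = rag_chain.invoke({"question": "What are the major changes in v0.1.0?"})
print(result["response"].content)
print(len(result["context"]), "retrieved documents")
```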
And we can use that information to then decide on updates, on different things that we might try to add to either augment our retrieval or our generation. And we can continue the process of improvement by continually measuring. But what are we measuring? Well, this is where the RAG evaluation really gets particular. We have to make sure that we understand the core concepts of RAG eval. And in order to sort of do this in an automated way, we need four primary pieces of information. You're probably familiar with question, answer, input, output, and you may even be familiar with question, answer, context triples. What we need for eval is we need to also add a fourth component, the ground truth, sort of the correct or right answer, so to speak. Now, in practice, it's often not feasible to collect a comprehensive, robust ground truth data set. So again, what we can do, since we're not focused on absolutes here, is we can actually create a ground truth data set synthetically. And this is what we'll do today. We'll find the best model that we can, pull GPT-4 off the shelf, and we'll generate this set of information that will allow us to do evaluation. Okay, so we'll see how this works. It's pretty cool. And Ragus has a new library for this. But in terms of actual evaluation, when we finally have this data set up, we need to look at two different components. The first component is retrieval. There are two metrics that focus on retrieval exclusively. One is called context precision, and context precision asks the question, how relevant is the context to the question? All right, context recall, on the other hand, asks the question, is the retriever able to retrieve all of the relevant context relevant to the ground truth answer? On the generation side, we have two metrics as well. The first is answer relevancy, which asks the question, how relevant is the answer to our initial query? And finally, faithfulness tries to address the problem of hallucinations and asks, is the answer fact checkable from the context or is this a hallucination? So the four primary metrics in the RAGUS framework are these four, two for retrieval, two for generation. Let's dig in a little bit deeper to each one so that we really try to start grokking each metric individually because they're slightly different but nuanced. Faithfulness is trying to measure this factual consistency. Let's look at an example. The question, where and when was Einstein born? Context. If this is the context, Albert Einstein, born 14 March 1879, was a German-born theoretical physicist, etc., etc. So a high faithfulness answer is something that says, well, he was born in Germany and he was born on 14 March 1879. Where a low faithfulness answer might get part of it right, but might hallucinate, right? We want to avoid these hallucinations with faithfulness. So we're looking at the number of claims that can be inferred from the given context over the total number of claims in the generated answer. To be 100% faithful to the facts, we want this to be the same number. Okay, so answer relevancy is trying to, of course, measure how relevant the answer is. Rather than considering factuality, how factual it is, what we're doing here is we're penalizing when the answer lacks completeness or on the other side, when it contains redundant details. So, for instance, where is France and what is its capital? A low relevance answer is like talking to somebody that's not paying attention to everything that you said. Oh, France is in Western Europe. 
It's like, yeah, okay, well, what about the other part of my question, right? You want it to be completely relevant to the input, just like a good conversationalist's answer would be. Very relevant, right? Okay, so context precision, as we get into the retrieval metrics, we're thinking about, in this case, a way that we can evaluate whether all of the ground truth relevant items are present in the context and how well ranked they are in order. So what we're looking for is we want all the most relevant chunks that we return from our vector database to appear in the top reference ranks. Okay. We want lots of good stuff ranked at the top. That's what we want. And so we're really looking for everything that's relevant to the question to then be returned in our context and to be order ranked by relevancy. Makes sense, you know, just the way we would want to do it if we were writing a book report or something. Finally, context recall is again kind of doing this same thing that we talked about before. We want to make sure we're paying attention to everything that's relevant. We want to make sure that we're addressing everything that's asked. So if the question here, where is France and what is its capital? Once again, if we have a ground truth answer already, the key here is we're actually leveraging ground truth as part of calculating this metric. France is in Western Europe and its capital is in Paris. A high context recall is addressing both of these. And within each sentence of the output addressing both of these. You can look sort of ground truth sentences that can be attributed to context over number of sentences in ground truth. And a low context recall is going to kind of be doing the same thing that we saw earlier. Well, France is in Western Europe, simple villages, Mediterranean beaches, country is renowned, sophisticated cuisine, on and on and on, but it doesn't address anything about Paris, which of course the ground truth does. And we can start to get a picture of, if we look at each of these metrics, we get some idea of how our system is performing overall. But that's generally kind of difficult to get a perfect picture of that. These are the tools we have, and they work, as we mentioned, very well for directional improvements. Context precision is sort of conveying this sort of high-level quality idea, right? Not too much redundant info, but not too much left out. Context recall is measuring our ability to retrieve all of the necessary or relevant information. Faithfulness is trying to help us avoid hallucinations. And answer relevancy is sort of, am I to the point here? Am I very, very relevant to the question that was asked? Or am I kind of going off on a tangent here? And finally, RAGUS also has a few end-to-end metrics. We're just going to look at one of them today, just to give you an idea. And that one is called answer correctness. This is a great one for your bosses out there. You want to know if it's correct? Boom. How about we look at correctness, boss? So this is potentially a very powerful one to use for others, but beware, you know what's really going on and directional improvements is really what we want to be focusing on. But we want to basically look at how the answer is related to the ground truth. Of course, if we have like a true ground truth data set, this is probably a very, very useful metric. If we have one that's generated by AI, we might want to be a little bit particular, a little bit more careful in looking at this metric and relying on it too much. 
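Written out, the two ratios described verbally above are roughly:

```latex
\text{Faithfulness} =
  \frac{\left|\,\text{claims in the answer that can be inferred from the retrieved context}\,\right|}
       {\left|\,\text{claims in the answer}\,\right|}
\qquad
\text{Context Recall} =
  \frac{\left|\,\text{ground-truth sentences attributable to the retrieved context}\,\right|}
       {\left|\,\text{sentences in the ground-truth answer}\,\right|}
```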
But if we have this great alignment between ground truth and answer, we're doing a pretty good job, right? Let's see a quick example for this one. We're kind of looking at two different things. We're looking at that factual similarity, but we're also looking at semantic similarity. So, you know, again, you can use this Einstein example. If the ground truth was Einstein was born in 1879 in Germany, the high answer correctness answer is exactly that. And then of course, low answer correctness is you're getting something literally wrong. So there is overlap between all of these things and it's important to sort of track that. But overall, the steps for doing RAGIS are to generate the question answer context ground truth data. And there's a awesome new way to do this called synthetic test data generation that has recently been released by RAGUS. We'll show you how to get it done today. Run that eval and then go ahead and try to improve your RAG pipeline. We're just going to take one simple retrieval improvement off the shelf from Langchain today. It's called the multi-query retriever. This is going to sort of generate many queries from our single query and then answer all of those and then return the relevant context from each of those questions into the prompt. So we're actually getting more information. But you can pick any retrievers off the shelf and you can then go back, you can look, did my metrics go up? Did they go down? What's happening as I add more data or more different retrieval advanced methods to my system? And in this way, we can see how we can combine RAGIS with RAG improvement as Wiz will go ahead and show us right now. Oh yeah, Greg, can't wait. Thank you. So RAGIS, this is the thing we're here to talk about, right? It's a amazing library that does a lot of cool, powerful things. But the thing that is, you know, most important is that it allows us to have some insight into changes we make in terms of the directional impact they have, right? So while we might not be able to say, you know, these answers are definitely true, as Greg was expressing, we can say, it appears as though these answers are truer than the ones we had before, which is awesome. So let's look at how we can do this. First of all, in order to actually do, you know, a evaluation on all of the metrics, we'd have two important things. One, we need to have questions. So these are questions that are potentially relevant to our data. In fact, they should be relevant to our data if we're trying to assess our retrieval pipeline, as well as our generations. And also some ground truths, right? As Greg was mentioning, you know, we are going to use synthetically created ground truths. So it might be more performant to use, let's say, you know, human labeled ground truths. But for now, we can let the LLM handle this. I'll just zoom in just a little bit here. And the idea is that we're going to leverage Ragus's new synthetic test data generation, which is very easy to use, much better than what the process we had to do before, which is kind of do this process manually. We're going to go ahead and use this to create our test data set. Now, it's important to keep in mind that this does use GPT-3, 5 Turbo 16 K as the base model, and it also includes GPT-4 as the critic. So we want to make sure we're not evaluating or creating too much data, or if we are, that we're staying very cognizant of the costs. 
So the first thing we're going to do is just create a separate data set or separate document pile that we're going to pull from. We're doing this to mitigate the potential that we're just asking the same LLM, the same questions with the same context, which might, you know, unfairly benefit the more simple method. So we're just going to create some new chunks with size 1000, overlap 200. We're going to have 24 docs, so about the same, 29, 24. And then we're going to use the test set generator. It really is as easy as test set generator with open AI. That's what we're using for our LLM. And then we're going to generate with langchain docs. You'll notice this is specifically integrated with langchain. There's also a version for Lama index. And all we need to do is pass in our documents, the size that we like of our test set, and then the distributions. Now this distributions is quite interesting. Basically, this is going to create us questions at these ratios from these subcategories. So the idea is that this is going to be able to test our system on a variety of potentially different, you know, tests, right? So we have simple, which is, you know, as you might think, very simple. And we have, you know, this reasoning, which is going to require some more complex reasoning that might, you know, tax our LLM a little harder. And then we have this multi-context, which is going to require multiple contexts. So our LLM is going to have to pick up a bunch of them in order to be very good at this particular kind of task. And the reason this is important is that not only do we get kind of an aggregate directional indication of how our system is improving, but we can also see how it's improving across specific subcategories of application. Very cool, very awesome. Thanks to the RAGUS team for putting this in. You know, we love this and it makes the job very much a lot easier. So that's great. We look at an example of the test data. We have our question, we have some contexts, and then we have our ground truth response, as well as our evaluation type, which is in this case, simple. In terms of generating responses with the RAG pipeline, it's pretty straightforward. There is an integration that exists between Langchain and RAGIS. It's currently being worked on to be brought up to speed. But for now, we're just going to kind of do this manually. So what we're going to do is we're going to take our test set. We're going to look and see. We've got our questions, context, ground truths, as well as our evolution type. This is our distribution that we talked about earlier. And then we're going to grab a list of questions and ground truths. We're going to ask those questions to our RAG pipeline. And we're going to collect the answers and we're going to collect the contexts. And then we're going to create a Hugging Face data set from those collected responses along with those test questions and our test ground truths. We can see that each of the rows in our data set has a question with our RAG pipeline's answer, our RAG pipeline's context, as well as the ground truth for that response. Now that we have this data set, we're good to go and we can go ahead and we can start evaluating. Now, Greg's talked about these metrics in depth. The code and the methodology can be found in the documentation from Ragas, which is very good. These are the ones we're caring about today. Faithfulness, answer relevancy, context precision, context recall, and answer correctness. 
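A sketch of the synthetic test set generation and response collection described above, following the Ragas 0.1-era API. The import paths, column names, and the eval_documents / rag_chain variables are assumptions carried over from earlier sketches; check the Ragas docs for your version.

```python
# Synthetic test set: GPT-3.5-16k generator + GPT-4 critic by default, per the walkthrough.
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    eval_documents,                                   # the separate 1000/200-chunk document pile
    test_size=20,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
test_df = testset.to_pandas()                         # question, contexts, ground_truth, evolution_type

# Ask each test question to the RAG pipeline and collect answers + retrieved contexts.
from datasets import Dataset

records = {"question": [], "answer": [], "contexts": [], "ground_truth": []}
for row in test_df.itertuples():
    out = rag_chain.invoke({"question": row.question})
    records["question"].append(row.question)
    records["answer"].append(out["response"].content)
    records["contexts"].append([doc.page_content for doc in out["context"]])
    records["ground_truth"].append(row.ground_truth)

response_dataset = Dataset.from_dict(records)
```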
And you can see it's as simple as loading, importing them, and then putting them into a list so that when we call the evaluate, you know, we're going to pass in our response data set, which is this data set we created above that has these rows for every question, and then our metrics, which we've just set above. That's all we have to do. Now, the test set generation is awesome and very useful. Another change that Ragas made recently is that they've made their evaluation async. This is a much faster process than it used to be. As you can see, this was around 42 seconds, which is much better than the times that we used to see. Thanks, Ragas team, for making this change. We can get our results here. We have our faithfulness, our answer relevancy, our context recall, our context precision, and our answer correctness. You can see that it does all right. But again, these numbers in a vacuum aren't really indicative of what's happening. It's like we want these numbers to be high, but we're more interested in seeing if changes we make to our system make those numbers higher. So let's look at another awesome part of RAGUS before we move on to making a change and seeing how it goes, which is we have the ability to look at these scores at a per-question level in the Pandas data frame. So you can see that we have all of our scores and they're given to us in this data frame this is huge especially because we can map these questions back to those evolution types and we can see how our model performs on different subsets of those uh those distribute the elements of that distribution so now we're going to just make a simple change. We're going to use the multi-query retriever. This is stock from the Langchain documentation. We're going to use this as an advanced retriever. So this should retrieve more relevant context for us. That's the hope anyway. We'll have our retriever and our primary QA LLM. So we're using the same retriever base and the same LLM base that we were using before. We're just wrapping it in this multi-query retriever. Now, before we used LCEL to create our chain, but now we'll showcase the abstraction, which is going to implement a very similar chain in LCEL, but we don't have to actually write out all that LCEL. So we're going to first create our stuff documents chain, which is going to be our prompt. We're using the same prompt that we used before. So we're not changing the prompt at all. And then we're going to create retrieval chain, which is going to do exactly what we did before in LCL, but it's, you know, we don't have to write all that LCL. So if you're looking for an easier abstracted method, here you go uh you'll notice we call it in basically the same way and then we are also looking at uh this answer the answer is basically uh you know the response.content from before and then uh you know we can see this is a good answer makes sense to me uh but we also have a better answer for this what is Landgraf question. So this heartens me, right? I'm feeling better. Like maybe this will be a better system. And before you might have to just look at it and be like, yeah, it feels better. But now with RAGUS, we can go ahead and just evaluate. 
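A sketch of the evaluation call and the multi-query variant described above, reusing response_dataset, retriever, and llm from the earlier sketches. The imports are the ones I believe ship with ragas 0.1.x and langchain 0.1.x; the prompt here uses an {input} slot because create_retrieval_chain passes the user's query under that key.

```python
# Score the collected responses on the five metrics discussed above.
from ragas import evaluate
from ragas.metrics import (faithfulness, answer_relevancy, context_precision,
                           context_recall, answer_correctness)

results = evaluate(
    response_dataset,
    metrics=[faithfulness, answer_relevancy, context_precision,
             context_recall, answer_correctness],
)
per_question_scores = results.to_pandas()   # per-question scores, joinable back to evolution_type

# The retrieval change under test: wrap the same base retriever in a MultiQueryRetriever,
# then use the stuff-documents / retrieval-chain helpers instead of hand-written LCEL.
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.prompts import ChatPromptTemplate

mq_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer only from the provided context. If you cannot, say \"I don't know\"."),
    ("human", "Context:\n{context}\n\nQuery:\n{input}"),
])

mq_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)
doc_chain = create_stuff_documents_chain(llm, mq_prompt)
mq_rag_chain = create_retrieval_chain(mq_retriever, doc_chain)

print(mq_rag_chain.invoke({"input": "What is LangGraph?"})["answer"])
```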
We're going to do the same process we did before by cycling through each of the questions in our test set and then getting responses and context for them and then we're going to evaluate across the same metrics you'll notice that our metrics uh have definitely changed so let's look at a little bit more closely how they've changed so it looks like we've gotten better at our faithfulness metric we've gotten significantly better at answer relevancy which is nice we've gotten a little bit better at context recall. We've taken some small hits, a small hit on context precision, and a fairly robust hit on answer correctness. So it's good to know that this is going to improve kind of what we hoped it would improve. And now we are left to tinker to figure out how would we improve this or answer correctness doesn't get impacted by this change, but at least we know in what ways, how, and we're able to now more intelligently reason about how to improve our RAG systems, thanks to RAGIS. And each of these metrics correspond to specific parts of our RAGIS application. And so it is a great tool to figure out how to improve these systems by providing those directional changes. With that, I'm going to kick it back to Greg to close this out and lead us into our Q&A. Thanks, Wiz. Yeah, that was totally awesome, man. It's great to see that we can improve our rag systems not just sort of by thinking about i think that's better uh land graph question got answered better but actually we can go and we can show our bosses our investors anybody that might be out there listening hey look we have a more faithful system check it out went from base model to multi-query retriever and improved our generations. Of course, as developers, you want to keep in mind exactly what the limitations of each of these things are. But for all of those folks out there that aren't down in the weeds with us, if they really want an answer, here's an answer. And so it's awesome that we can go and take just things off the shelf that we're trying to qualitatively analyze before and directionally improve our systems by instrumenting them with RAGIS and measuring before and after small iterations to our application. So today we saw Langchain v0.1.0 to build RAG, and then we actually did RAG on the Langchain v0.1.0 blog. Expect stable releases from here. It's more production ready than ever. And you can not just measure faithfulness, you can measure different generation metrics, different retrieval metrics even different end-to-end metrics and big shout out to everybody today that supported our event shout out to langchain shout out to ragas and shout out to everybody joining us live on youtube with that it's time for q a and i'd like to welcome Wiz back to the stage as well as Jithin and Shaul from Vragus, co-founders and maintainers. If you have questions for us, please scan the QR code and we'll get to as many as we can. Guys, welcome. Let's jump right in. Hey guys. Hey. What's up? All right. Let's see. I'll go ahead and toss this one up to Jitin and Shaul. What's the difference between memorization and hallucination in RAG systems? How can developers prevent hallucinated content while keeping the text rich. Yeah. You want to go for it? I know I didn't actually understand what you actually mean by memorization. Yeah. Oh, yeah. OK. You want to take a crack at this, Shaul? Yeah, I mean, what is the difference between memorization and hallucination rack systems? That's it. 
The line between memorization and hallucination, I don't know where to draw that particular line. It's something seems like, seems like what it meant is the usage of internal knowledge versus you know there are situations in drag when knowledge is a continually evolving thing right so maybe the llm thing that a person is you know is still alive but the person died yesterday or something now the now if if that particular thing is uh is read using wikipedia or something there will be a contrasting knowledge between the LLM and what the ground truth Wikipedia sees. Now, that can be hard to overcome because the LLM still believes something else. So it's a hard to crack problem and I hope there will be many future works on it. But how can we prevent such hallucination? The thing is, what we require is when using LLMs to build RAC, we can align LLMs so that LLMs answer only from the given grounded text data and not from the internal knowledge. So, or there must be high preference to the grounded text data compared to what is there in the LLMs internal knowledge. So that can be one of the situations. Yeah, definitely. Wiz, any thoughts on memorization versus hallucination before we move on here? I think the answer to the question was already provided uh basically really i mean yeah yeah we when it comes to the memorization versus hallucination i think the the most important thing is uh you know memorization is that you could maybe frame it as a slightly less negative form of hallucination because it's likely to be closer to whatever the training data was. But in terms of RAG application, both bad. We want it to really take into account that context and stick to it. Okay. We've got a question from Jan Boers. I'm curious if you already have experience with smart context aware chunking. Can we expect significant improvements of rag results using smart chunking? What do you think, Jitin? Is this something that we can expect improvements in? Yeah, so how you, so one thing that we see when we're building rag systems is that how you're formatting the data is where most of the problems are. Like if you take some time to clean up the data and to format the data is like where most of the problems are like if you if you take some time to clean up the data and like to format data that actually makes it easier for your act the performance difference like like really great because like models right now if you're using a very stable model if you provide with the correct context the model will be able to use the information in the context to get it so all these tips and tricks to optimize about even like um chris was using the multi uh context method right it's also another trick to get make sure that you get different context from different perspectives into the final answer so all these different types of tricks can be used and this is actually why we started this also we wanted to like evaluate all the different different tricks that are there out there and try to see which works best because it can be different on your domain. So yeah, smart chunking is smart. Yeah. So you're saying that it actually matters what data you put into these systems just because they're LLMs, it doesn't solve the problem for you? Yeah. That actually matters a lot more because what goes in comes out. So that's important that you format your data. That's right. The data-centric paradigm has not gone anywhere, people. You heard it here first. Garbage in, garbage out. So Matt Parker asks, maybe I'll send this one over to Shaul. 
Can you compare TrueLens and RAGAS? This is the first I've heard of TrueLens. Maybe if other people have, and maybe you can tell us a little bit about what they're doing and what you're doing and the overlap you see. Sure. Yeah, TrueLens has been around for a while for evaluating ML applications, and they are also doing a lot of applications. So RAGAS currently is mostly focused on racks as in we wanted to crack the application that most people care about that is racks. And so we are mostly, you know, doing things that can help people to evaluate and improve their racks. We are not building any UI. We are largely providing for the integrations part. We are largely interested in providing integrations to players like Langsmith so that people can trace and see their UI rather than building a UI on top of Raga. So Raga mainly offers metrics and features like as you have seen, synthetic test data generation to help you evaluate your racks. I don't think TrueLens has a synthetic data generation feature, which is something that most of our developers really liked because it has saved a ton of their time because nobody really wants to go and label hundreds of documents of documents it's a boring job right so we are trying to double down on these points that we have seen that developers really like and we are trying to stay true to the open source community as well nice okay very cool very cool rad asks I'll send this one over to you, Wiz. Can you combine multiple query retriever with conversational retrieval chain? Sure. Yeah. Basically, Langchain works in a way where you can combine any retriever inside of any chain, right? So a retriever is going to be some kind of slot that we need to fill with something. So if you want to use a more complex retrieval process or combine many different retrievers in an ensemble, you can do that with basically any chain. Basically, that conversational retrieval chain is looking for a retriever. And so as long as it can be accessed through the retrieval API, it's going to work fine. retriever. And so as long as it can be accessed through the retrieval API, it's gonna work fine. I would I would add though, conversational retrieval chain, you'll want to use the 0.1.0 version, which is, you know, been implemented with LCL. But other than that, you're good to go. Okay, okay. And sort of back to this idea of sort of smart, chunking, smart hierarchy of data. Is there sort of like, we often talk in our classes about this sort of black art of chunking. Everybody's like, well, what's the chunk size I should use? What's the chunk size? So Sujit asks, and maybe I'll send this one over to you, Jithin, I know the chunk size matters. Are there like guidelines for chunking that you guys are aware of or that you recommend when people are building rag systems? Yeah, so I don't have like a very good guideline. Maybe Shahul can take back it up. But one thing that I've like seen like personally from experience is like, so A, do the evaluations, but then B, like also making sure that you get, you combine like multiple, like, so you basically, you create a hierarchy system where you have like different chunks. Then you summarize the different like concepts, like define the, uh, summarize the different channels so that, uh, even like all the beer, like core ideas are there in the hierarchy that actually has been like very like helpful. So, yeah. 
like core ideas are there in the hierarchy that actually has been like very like helpful so yeah so exactly like chunking size i haven't seen it in the uh like matrices as such um but all the like all the recursive like summarization that has helped and i think uh lament x has like uh a few retrievers right there what shall what do you think? VARSHAAL KUMAR- Yeah, just adding some more points into it. I think there is no one size fits chunk size that fits all type of documents and all type of text data. So it's a relative thing that should either you get. So there are two ways to handle this problem. Either you can, the general rule of thumb is to ensure that enough context the context makes sense even without any you know as as an individual you know as an individual chunk it it should make con if it should make some sense if you read it if a person writes it so how to how to achieve this you can achieve this either using writing a set of heuristics or let's say you know it can be something like okay determine the document you know type or something and change it using that and i think the from moving from heuristics to where we are going i think we might even see smaller models smaller very smaller models that are capable of chunking determining the chunk boundaries smartly so that you don't really have to rely on the heuristics it's more a generalizable way of doing it so I think that's where we are going in in the future um of chunking and uh hopefully the problem gets solved like that yeah yeah yeah I really like this idea of making sure each individual chunk makes sense before sort of moving up a level and thinking about, okay, what's the exact, you know, hierarchical parent document, multi-equal, like whatever it is that you're doing, each chunk should make sense. And that's going to be dependent on data. Yeah. I really liked that. And okay. So let's, let's go ahead and sort of related to that, I wanna go to this embedding model question in the Slido from Ron. It's similar in sort of relation to this chunking idea. I mean, people always want the answer, you know? So what chunk size? Here, Ron asks, which embedding models should I be using when I develop a system? Any emergent models or techniques that I can see significant improvements with? Maybe Shaul, if you want to continue here. Sure. Again, there is no one fit size for this answer. You know, the thing is that, again, it depends on a lot of factors. So if you don't want to really you know use again first first you know question will be open source or closed source you have like a lot of open source players even revealing open a with their open source models like i think recently uh by uh alibaba group uh released their m3 embedding which is like awesome it's like most powerful open source embeddings which we we have ever seen uh even revealing open is at our buildings right so it's it's a set of questions that you have to answer if you want to go for easy way of building a baseline rag of course open is embeddings you know good place to start you don't have to worry about anything else then you you can iteratively improve it that's where also ragas comes in let's say you have now you have an abundance of embeddings to choose from right so now you have you want a way to compare it so you don't use ragas you know you can just compare all these different embeddings choose the one that fits you and you're done there it it is. There it is. Just closing up this topic on chunks and embedding models. 
Wiz, I wonder, why did you choose Ada? Why did you choose, what is it, 750 overlap? Any particular reason? Zero thought put into those decisions. We used Ada because it's the best OpenAI model that's currently implemented. And we used 750 because we did. Basically, we wanted to show that those naive settings are worse than a more considerate or a more mindful approach. And so to do that, we just kind of selected them. I think the thing I really want to echo that we've heard so far is, when we're thinking about our index or we're thinking about our vector store, we really want to be able to represent individual quanta of information. And so the closer we can get to that, the better it will be. And then we can add that hierarchy on top. And I think what was said about using models to determine that at some point is definitely a future we can imagine we'll be living in soon. Yeah. And I think, again, we go back to this data-centric idea. It's easy to get the RAG system set up and to get instrumented with Ragas, but you're going to get the improvements, you're going to get the thing really doing what you need it to do for your users, by doing the hard, kind of boring data work, data engineering, data science on the front end, that really you just can't outsource to AI and you just have to kind of deal with yourself. Okay, one more sort of what's-the-answer question. I want to maybe send this one to Jithin. If somebody is picking up Ragas and they build a RAG system and they're like, okay, well, which Ragas metric should I use? You know, which one should I look at? Right. What would you say? Is there a starting point? Is there a sequence that you'd look at? Or is the jury still out on this? So first of all, just try out all of the stuff, because once you know which components work how, what the state of all these components is, that gives you an idea of, okay, where can I make an improvement as fast as possible? If your generator is bad, maybe try out a few other LLMs, or maybe if your retriever is bad, then figure out, okay, in the retriever part, what is actually happening? Is it context relevancy? Is it the recall that's bad? And that is the way. So starting off, try out all the metrics that you have, and then focus on the ones that are the worst.
And like after you understand like what the metrics are, you will get an idea of how you could like what other stuff you can actually try out to improve it and if it's like try out the easiest part like cross out the low-hanging fruits first and that is how you would like over time like progressively like uh improve it like but like i said it's not the absolute values that matter it's like the trends that matter right so you guys did a good job in explaining that so make sure like you go for the easiest things that you can patch up fast and keep that trend in the upward direction. Yeah, yeah, I love it. If you're getting low retrieval metrics, maybe pay attention to some retriever stuff. If you're getting low generation metrics, maybe try a different model. It's like, yeah, it's so simple when we can break it down like this. And you know, just a shout out to everybody out in Manny, just shouting out to Manny. That was kind of an attempt to answer one of your many questions today. We'll see if we can get some more on LinkedIn, but I think this idea of like getting your system instrumented so you can start to look at and chunk up different pieces of it and try to improve them. There's a lot of content that needs to be made on this. These guys are open source first, open source forward. We'd love to see some folks in the community start to put some guides together for how to actually break down and use RAGUS in sophisticated ways. So last question, guys, we're at time here, but what's next for RAGUS in 2024? Maybe if either of you wanna go ahead and take take this go ahead and take it let us know what to expect from you guys heading forward this year yeah shall we we want to take this yeah yeah that's a tricky question so you want to go where the community takes us so yeah doubling down on um things like synthetic data generation there are there are a lot of interests there there are a lot of interest in expanding ragas to other llm tasks as well so yeah there are all these interesting directions to take hopefully uh you know we'll get more signals from the community on which path so to take i mean we do have a lot of directions a lot of feature requests coming in so we have to just you know take that decision and move on but uh but yeah as of now um the the synthetic test generation is something that gets a lot of interest we want to you know make it very stable very useful make sure that that we push the limits of you know uh the the closed source models and plus frameworks analogy uh to build a great uh you know test data point that's that's very easy and uh easy to use yeah yeah anything to add yet then yeah like honestly like so right now we have a good base right now we're like very curious what like what we can do like evaluation driven development what are the extremes of that so like curious to see like what like uh what the community comes up with what like like you guys can like we come up with so yeah excited really excited for that yeah yeah let's see what everybody builds ships and shares out there and uh and contributes well thanks so much jiten thanks shaul thanks Wiz. We'll go ahead and close it out for today. And thanks everybody for joining us. Next week, you can continue learning with us. We're talking alignment with reinforcement learning with AI feedback. If you haven't yet, please like and subscribe on YouTube. 
And if you haven't yet, but you liked the vibe today, think about joining our community on Discord, where we're always getting together and teaching and learning. You can check out the community calendar directly if you're not a Discord user to see what's happening this week and in upcoming weeks. And finally, if you're ready to really accelerate LLM application development in your career or for your company, we have a brand new AI engineering bootcamp that's going to cover everything you need to prompt engineer, fine-tune, build RAG systems, deploy them, and operate them in production using many of the tools we touched on today, but also many more. You can check out the syllabus and also download the detailed schedule for more information. And then finally, any feedback from today's event, we'll drop a feedback form in the chat. I just want to shout out Jonathan Hodges as well. We will get back to your question and we will share all the questions today with the RAGUS guys to see if we can get follow-ups for everybody that joined us and asked great questions today. So until next time and as always keep building, shipping and sharing and we and the RAGUS guys will definitely keep doing the same. Thanks everybody. See you next time.", "title": "RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1", "duration": 3842, "uploader": "AI Makerspace", "upload_date": "20240207", "description": "GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evaluating and improving production Large Language Model (LLM) applications. With the rise of open-source evaluation tools like RAG Assessment (RAGAS) and built-in tools in LLM Ops platforms such as LangSmith, we're uncovering how to quantitatively measure and enhance the accuracy of LLM outputs. Discover how Metrics-Driven Development (MDD) can systematically refine your applications, leveraging the latest advancements in Retrieval Augmented Generation (RAG) to ensure outputs are factually grounded. We'll start with creating a RAG system using LangChain v0.1.0, assess its performance with RAGAS, and explore how to boost retrieval metrics for better results. Don't miss this deep dive into overcoming the challenges and understanding the limitations of current AI evaluation methods, with insights from our partners at LangChain and RAGAS. This is your opportunity to learn how to build and fine-tune RAG systems for your LLM applications effectively!\n\nSpecial thanks to LangChain and RAGAS for partnering with us on this event!\n\nEvent page: https://lu.ma/theartofrag\n\nHave a question for a speaker? Drop them here: \nhttps://app.sli.do/event/2rLa8RML994YsMQt1KLrJi\n\nSpeakers: \nDr. Greg, Co-Founder & CEO\nhttps://www.linkedin.com/in/greglough...\n\nThe Wiz, Co-Founder & CTO\nhttps://www.linkedin.com/in/csalexiuk/\n\nJoin our community to start building, shipping, and sharing with us today!\n https://discord.gg/RzhvYvAwzA\n\nApply for our new AI Engineering Bootcamp on Maven today! \n https://bit.ly/aie1\n\nHow'd we do? Share your feedback and suggestions for future events.\nhttps://forms.gle/ryzhbvxZtbvQ4BCv5", "datetime": "2024-06-09T21:02:21.809985"}
train/transcriptions-aa17bcdb-6deb-4f6f-8490-a6d37bceb350.json
ADDED
@@ -0,0 +1 @@
{"url": "https://www.youtube.com/live/XOb-djcw6hs", "transcription": " Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? It sure is, Greg. Yes. And is quantization like really as good and as dope as everybody's talking about? Yes. Emphatically, yes. Emphatically, yes. Man, I cannot wait to see exactly what's going on inside. You're going to show us how to do this today, right? Sure. All right. Let's go ahead and get right into it, man. We'll see you back in just a little bit. Today, we're going to talk quantization. I'm Greg. That's Chris. We're from AI Makerspace. This is a bit of an add on to last week's event, which talked about parameter efficient fine tuning and low rank adaptation. Today, we're gonna take it to the next level and talk quantization. We'll demystify the idea of quantization, and we will also talk about how to leverage the latest in low ink adaptation which is a quantized version of it called QLORA as always we'll be collecting questions with slido so go ahead and provide your questions for us throughout the day at that link and then we'll go ahead and answer as many as we can when we're through with the demo at the end. Of course, we'll have Chris back to lead and wizard his way through the demo on quantization soon, but for now, let's cover what we need to know so that's going to make sense to us. We're going to talk quantization of LLMs today, and we're going to talk fine-tuning with LoRa. This is the main goal. We want to understand and we want to align our aim to really grokking QLoRa and then seeing how we can implement that. We got a little bit of insight into quantization last time when we were loading the model but now we want to take a look at how it can be used to fine tune and some of the background and intuition associated with why this works and what the industry has sort of learned about the precision of numbers within our llms so we're going to talk-tuning quantization QLORA, and then we'll do it. And to sort of contextualize this, similar to last time, we wanna understand that often fine-tuning is coming into play after we do prompt engineering, often after we set up a retrieval augmented generation system. And we wanna now take a look at how we can optimize our large language model, or in other words, how we want the model to act, how we want the input and output schema of the model to be a little bit more constrained, a little bit more dialed in, a little bit less large, a little bit more small. And this is sort of the trend we're noticing as 2024 is upon us now. We are seeing a bigger and bigger interest in smaller, more performant language models and fine tuning is really a key aspect that's going to help us to get there so let's just remind ourselves what we talk about when we talk about fine tuning with peft laura PEFT LoRa. And why we need to do this. You know, when we talk LLMs, they're super big. They have billions and tens of billions of parameters. It's likely we'll see models with hundreds of billions of parameters before too long. Not all models are always getting bigger, but some of them are. And the reason is, is because if we keep adding more text and more parameters, we are pretty confident that our next word prediction will continue to improve. prediction will continue to improve. 
But as we do this, as we build larger and larger models, as we have to deal with more and more compute in order to be able to handle them, whether that's loading them, training them, fine tuning them, or performing inference on them and serving them. We're kind of abstracting away from the regular developer, the regular individual out there that doesn't have access to a giant cluster of GPUs to be able to even play with these things. And this is the core problem, is that when we go and we want to do full fine-tuning on many, many billions of parameters, this becomes a huge pain for anybody trying to use consumer hardware, any small business trying to just use the laptops that they have, maybe a few resources on the cloud. And this is as true for fine tuning as it is for loading and storing, certainly for deploying these models. It just costs too much. And the solution for kind of dealing with the fine tuning, the storing and the deploying is kind of the same. But today we're focusing on fine tuning. Today we're focusing on fine tuning using fewer parameters. It's all about using fewer parameters. We don't need all of them as we started to get some intuition into last time. And in fact, the ones that we have, what we're going to do today is we're going to take those parameters and we're going to make them smaller in a sense. We're going to make them smaller in a computational sense. This is the essence of quantization. So while it may not be necessarily fewer parameters when we talk about quantization, although it often is when we talk about fine-tuning, we're just trying to move these big, big, big models towards smaller packages through fewer parameters and through more efficient representation of those parameters. And we saw last time, we saw that LoRa is the number one PEF method you should know. It's called low-rank adaptation. And the big idea of LoRa, as we discussed, was to fine-tune using factorized matrices. And again, we didn't focus on fine-tuning absolutely everything. We did fewer parameters. That was great because it was more efficient. And we found out that we could actually leverage LoRa adapters for many tasks. So you could have one big, big model and a ton of different lower adapters and deploy that to production. Deploy each of those adapters to production because at inference is when the adapter would actually come into play. So very, very flexible, very good technique for. Larger companies and industry, especially that want to just have many adapters and larger companies and industry, especially that want to just have many adapters in one very powerful model, we'll probably start to see this emerge as an approach to AI development in the enterprise. And, you know, it's really comparable to fine tuning, full fine tuning. full fine-tuning. So, you know, we saw, in essence, that fine-tuning is all about modifying behavior of LLMs to update parameters. Parameter-efficient fine-tuning is all about fine-tuning with fewer parameters. Low-rank adaptation was all about fine-tuning using factorized matrices. And so parameter-efficient fine-tuning through low-rank adaptation is all about modifying behavior by updating fewer parameters using factorized matrices. So this all sort of flows together. This leads us directly to our new friend, quantization. And this meme is so good, I had to put it twice, because it's such an oft misunderstood idea. Certainly has taken a long time for me personally to really try to grok this thing. 
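To put a number on the factorized-matrix idea before moving on to quantization, here is a small back-of-the-envelope sketch in plain Python; the 4096 x 4096 size and rank 64 are illustrative values for a 7B-class attention projection, not exact figures from any one model:

    # One attention projection in a 7B-class model is roughly a 4096 x 4096 weight matrix.
    d_in, d_out = 4096, 4096
    full_update = d_in * d_out            # ~16.8M numbers to train for this single matrix

    # LoRA swaps that update for two skinny factors: B (d_out x r) and A (r x d_in).
    r = 64
    lora_update = d_out * r + r * d_in    # ~0.52M numbers

    print(full_update, lora_update, lora_update / full_update)   # about 3% of the original, per matrix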
So let's see if we can break it down in a way that makes sense to all of you. First off, the weights of our LLM, when we talk about weights, it's the same thing as when we talk about parameters. Okay. So parameters, I might say weights, we're still talking about parameters. Those parameters are simply numbers. They're just numbers. And specifically, they're floating point numbers, They're floating point numbers, also known as floats. And it's important to understand a little bit of the detail here, because this is the essence of what we're doing in quantization. When we talk about floats, you may harken back to your days in school, maybe chemistry, back to your days in school, maybe chemistry, where you learned about significant figures, sig figs, everybody's favorite, right? And then if you're like me, you become an engineer and you don't care anymore, ever again. But I was a mechanical engineer. If you're a computer scientist, computer engineer, maybe you continue to go deeper. And these days in AI, if you're a developer, you need to continue to go a little deeper. Because this idea of a float is cool, this integer with a fixed precision, we can talk about representing, for instance, 12.345 as 1, 2, 3, 4, 5 times 10 to the minus 3. And we can then do this by using a specific number of bits in our computer. When we talk about this precision, this fixed precision, there's a number of different types of precision. What we're going to generally be using is what's called full precision when we're doing computations that are kind of default computations. Full precision means that I have 32 bits to represent my floating point number. And they're broken up into a couple different pieces here, but the big idea is that there's 32 bits. And the question is, is that the right amount when we want to go and deal with 70 billion parameter models and things like that? And it turns out that in machine learning, we found sort of over time through experiments that if we didn't use 32-bit precision and instead we used 16-bit precision, Instead, we used 16-bit precision, essentially half precision, to again, simply represent those decimal numbers that are inside of each of the neural network, that represent each of the neural network weights, sort of each of the neural network perceptrons is a way you could think about this. Then what we're finding is that we can get almost identical inference outcomes from our LLM. Because remember, we just want the words that come out at the end. We just want the ideas that come out of that. We just want the outputs. We don't necessarily care about the precision of the stuff within the black box. We put in, we get out. And a lot of people were seeing this. A lot of researchers were seeing this with the large language models, that if we just leveraged half precision we can get very very good outcomes and what this does is this effectively halves the entire model size so what are we saying we're saying that we can sort of get exactly the same thing coming out coming out, even if we represent each of the model weights using half as much information we can think about. Because really, I mean, how many sig figs do we need? And another way we can talk about moving from a 32-bit down to a 16-bit representation is we can say we are quantizing. We quantize the 32-bit weights down to 16-bit. weights down to 16 bit. Hence quantization. Now, when it comes to quantization, there are many different approaches to quantize model weights. So, this is very important. 
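A quick worked example of what the precision choice means for memory, with 7 billion parameters used as a stand-in for a Mistral-7B-class model (decimal gigabytes, weights only):

    params = 7_000_000_000
    bytes_per_param = {
        "fp32 (full precision)": 4,
        "fp16/bf16 (half precision)": 2,
        "4-bit (quantized)": 0.5,
    }
    for name, nbytes in bytes_per_param.items():
        print(f"{name:30s} ~{params * nbytes / 1e9:.1f} GB just to hold the weights")
    # fp32 ~28 GB, fp16/bf16 ~14 GB, 4-bit ~3.5 GB, plus a small overhead for quantization constants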
We're not going to cover every single approach because that's not really necessary for what we want to discuss today. But there are many different ways to quantize model weights, and we hope to continue to bring you more content on ways that are a little bit different in terms of their actual implementation and the nuances in future content but for today we're going to focus and use this Q-Laura idea as a focusing lens now Q-Laura starts the story begins with a paper called 8-Bit Optimizers via Blockwise Quantization. And this was a paper that came out of the University of Washington. Tim Detmers was the lead author, and he's been quite a superstar in the field.'s he's kind of like the quantization guy and in this paper they showed that you can use 8-bit representations and maintain performance that we're seeing at a level of full precision or 32-bit. So here we see in this kind of early paper, again, one of these pieces of work where they're saying, hey, look, experimentally, we're seeing that if we reduce the precision, we can still get great results. And this is not reducing it to half precision it's reducing it to quarter precision 32 down to eight and this bits and bytes approach this bits and bytes paper turned into what became the bits and bytes library which has since evolved and is something that we'll see the Bits and Bytes library, which has since evolved and is something that we'll see Chris use today, and it's something that gets used all the time now. Now, Bits and Bytes, you can go ahead and recall that one byte is equal to eight bits. We're going to continue the discussion in bits today, but you'll see many papers and discussions of things that will talk in bytes as well. So pretty simple to understand why the library was named bits and bytes. Now, again, this is one approach. And so there are some trade-offs as there are with any approach. For instance, when we use the bits and bytes approach to quantization, we're not really getting any additional benefits to our inference latency. We're not really speeding up inference a whole lot by using this particular approach to quantization. However, what we are doing is we're leveraging a tool that gives us very flexible use of those LoRa adapters, right? So for enterprise, if we're thinking about how do I have one big model and just have a bunch of adapters, this is going to be our friend. And this is why we choose to focus on this one today. And this bits and bytes library forms the basis for what comes next. It kind of forms the basis for this QLORA idea, this efficient fine-tuning using quantization. And the fine-tuning using quantization from the QLORA paper, the big bang box takeaway of this is it's super great, even though it's eight times less precise. less precise. So what we actually have going on in QLORA is we have not an 8-bit representation, but we have a 4-bit representation. And so what completely insane. And we can fit all of that on a single 48 gig GPU, single 48 gig gpu which is like just kind of incredible it's just kind of it's kind of mind-blowing that we can do this and so this q laura paper is essentially coming and saying hey hey, listen, we've got this idea that we can do fine-tuning using a four-bit approach versus even a half-precision approach, and we get amazing results. And so this is the essence of what's going on here with QLORA. the essence of what's going on here with QLORA. 
And so what we can kind of think about is if we go back to this idea of PEPF-DLORA fine-tuning, where we're modifying behavior by updating fewer parameters using factorized matrices. And we add this idea of quantization, where quantization is simply representing high precision numbers with low precision. Then we get to this place where we talk about PEFT-QLORA fine-tuning, where we talk about PEFT QLORA fine-tuning, where we're modifying behavior by updating fewer quantized parameters using factorized matrices. And so the process as outlined in the QLORA paper and the process that you're going to see today is something like this. We download the model weights. Anytime you download model weights from Hugging Face, they're always going to be in full precision, 32-bit. Then we load our parameter efficient fine-tuning model into GPU memory. Anytime we load into GPU memory for inference or training, we're going to be loading using that parameter efficient fine tuning method. And then we'll initialize our low rank adaptation, our LoRa configuration. And finally, and this is the key, this is the key to the whole thing, is that during training, what happens is we have the full precision 32-bit model, and we're going to actually load the 4-bit model, quantize 32-bit down to 4-bit, for training. Quantize 32-bit down to 4-bit for training. Now, during training, we're going to flow through the network, and we're going to, as necessary, each time we have to do a computation, each time we have to calculate something during our training process, we're going to de-quantize that 4-bit representation back up to a 16-bit half-precision representation. We're going to do the calculation, and then we're going to re-quantize back down. And at each step of our training or fine-tuning, we're going to quantize, de-quantize, move on. So we're never holding that half precision fully in our GPU memory. But rather, we're simply using half precision to do the calculations. This is the magic of what's really going on behind the scenes. And it turns out this works incredibly well. And again, that intuition behind the 16-bit piece is that we saw that for inference, you can go from 32- down to 16 bit and get very very good results we saw this experimentally over a lot of time not just papers from the university of washington but also papers from many other researchers and this q laura approach fundamentally Fundamentally, is to load those full precision weights into GPU memory as quantized 4-bit weights. And then only de-quantize up to 16-bit during calculation. Back down as it moves through. All right. So this is the core approach that we're going to see today. You're going to see things like this. This is the bits and bytes configuration. And you'll notice when we want to load in, we want to load in in 4-bit. You're also going to see a data type called NF4. Chris is going to talk a little bit more about it. It's very important. It's very essential to the QLOR approach. And that's it for the big ideas we need to really see how this build can be taken to the next level. So what we wanna do is we wanna take the same build that we've already looked at, the old UNO reverse card build, given the response, predict the instruction. We want to use the same model that we saw last week because it's still one of the best out there. Mistral 7B instruct V0.2. And we're going to use the same data for fine tuning. Just keep everything simple. That Alpaca GPT-4 data set is there. So again, output response, predict input instruction. 
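The configuration referred to above ("you're going to see things like this") looks roughly as follows. This is a sketch using the BitsAndBytesConfig from Hugging Face transformers, which is the object the demo walks through:

    import torch
    from transformers import BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # store the weights on the GPU in 4-bit
        bnb_4bit_quant_type="nf4",              # the NF4 data type from the QLoRA paper
        bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
        bnb_4bit_compute_dtype=torch.bfloat16,  # de-quantize to bf16 whenever a computation is done
    )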
And with that, we're ready to kick it back over to Chris, the wizard, to show us how to do fine tuning with PATHQ, Laura, and fill in some additional details. Wiz, over to you, man. Q Laura and fill in some additional details. Wiz over to you, man. Oh yeah, thanks Greg. Really appreciate it. And guys, I'm excited because quantization is definitely one of my favorite topics. It is the kind of like one of the best things we could do right now. And as you can see, we only used around 20 gigabytes of GPU RAM to train this 7 billion parameter model, which is quite impressive in my lens. That includes fine tuning. In any case, we'll get right into it. First of all, we're going to be using Mistral 7B Instruct V02. This is just Mistral's most recent Instruct tune model. I love it. And we're going to now move on from PEFT, which we discussed last week, into the Q in QLORA. So we discovered or we discussed, you know, the idea of how we can reduce the number of parameters that we train. But now how do we reduce the size of the parameters that we train? Now, first of all, what is quantization? Greg already talked us through it. I'm going to give a brief overview here of what's happening under the hood, and then we'll get into how to implement it in code. Spoiler alert, it's super easy. Thanks, bits and bytes. But let's look at what quantization is from this perspective. So quantization is a process of discretizing an input from a representation that holds more information to represent a representation with less information right that's crazy so the idea is we want to express more information with less information so how do we actually do that well in the tim detmer's q laura paper they rely on this process called blockwise k-bit quantization which which sounds, you know, like, very, you know, scary, but it's not so bad. It relies on two very important things. One, it relies on the fact that in neural networks, the model weights are mostly normally distributed. So as soon as we, if you're, if you're coming from a stats background, as soon as you hear that word normal distribution you you know your your eyes should light up uh you know we're we're going to be able to make use of a lot of very clever tricks uh to help us do whatever we're trying to do um and then it also relies on this idea of the nf4 format which which is a number format or data type created by Tim Detmers and team, which is information theoretically optimal. Now, not literally, it was proven this is not literally true, but it is empirically, for all intents and purposes, this is a fact that NF4 is very, very efficient, which is excellent. So how does this work behind the scenes, right? So, okay, we get it. Model weights are normally distributed. That's great. So what we're going to do is we're going to essentially put a pin in the number line that is near to the mean, right, of our desired numbers, which are going to be in a distribution. And that distribution is going to be normal, right? And then we're going to kind of use that mean as a zero point. And we're going to use this NF4 data type, which is a zero centered number format to represent the numbers that appear around that specific point in the number line. So there's a step that needs to take place here. We're going to normalize all of our numbers to be within a specific range of minus one to one. And then we're going to be able to have this idea of a saved place on our number line that we're going to understand a range around. And that's really about it. 
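For intuition only, here is a toy block-wise quantizer in the spirit of what was just described. It uses absmax normalization onto evenly spaced levels rather than the real NF4 levels (which are non-uniform and tuned for normally distributed weights), but it shows the "one pin per block of the number line" mechanic of quantizing and de-quantizing:

    import torch

    def toy_blockwise_quant(x, block_size=64, n_levels=16):
        """Quantize a 1-D tensor block by block to n_levels evenly spaced values in [-1, 1].
        16 levels corresponds to 4 bits per weight; real NF4 uses non-uniform levels."""
        levels = torch.linspace(-1, 1, n_levels)
        out = torch.empty_like(x)
        for start in range(0, x.numel(), block_size):
            block = x[start:start + block_size]
            absmax = block.abs().max()                    # the "pin": one constant per block
            normed = block / absmax                       # now everything lives in [-1, 1]
            idx = torch.argmin((normed[:, None] - levels).abs(), dim=1)
            out[start:start + block_size] = levels[idx] * absmax   # de-quantize
        return out

    w = torch.randn(1024)             # model weights are roughly normally distributed
    w_hat = toy_blockwise_quant(w)
    print((w - w_hat).abs().mean())   # small reconstruction error at ~4 bits per weight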
Now, it's a bit simplified and it's definitely, you know, you can look at the paper for the math. It's great. But the idea is that we have, we kind of drop a pin in the number line and we have this NF4 number format, which represents a range around that point to the number line. And that is what's going to build up the buckets or bins that we're going to use to represent our numbers. And the reason this works so well is again, because of the fact that model weights are normally distributed and because this is an informationally, theoretically optimal data type for that minus one to one range. So this is specific Yennefors for that minus one to one range for normally distributed, to one range. So this is specific, the n of four is for that minus one to one range for normally distributed, well, distribution. So that means the only reason this works is because of this first fact, right? Now, beyond just that, QLORA does an extra step. So you might have thought to yourself when I said drop a pin in the number line, right? Well, okay, if we drop a pin in the number line, that's all well and good, but doesn't that mean that we have kind of like a high precision number, right? It doesn't have to be as high precision perhaps, but it's definitely still high precision. And that's true, right? That pin we drop is high precision. Well, it can be used to represent many numbers. In this case, you know, 64 numbers from the QLORA paper. So each pin is associated with 64 numbers. Tim Demers and crew said that's not enough. You know, that's going to give us 0.5 bits per parameter of overhead, right? So we need to go bigger. So what they did is they actually took all of those quantization constants. That's the technical term for that pin that we're dropping, right? We take those quantization constants, and then we also quantize those. So we represent our quantization constants in an 8-bit format, and we do 256 of those for every 32-bit precision number. So we have one 32-bit precision quantization constant that sits on top of 256 8-bit quantization constants, which sits on top of each of those sits on top of 256 8-bit quantization constants, which sits on top of, each of those sits on top of 64 4-bit. So you can see the savings in terms of memory here is insane, right? We're able to represent so much of our data in that 4-bit representation. And we're also able to do it in a way that retains a ton of information. And that is key. I saw some questions in the YouTube chat kind of concerning, you know, what's the trade-offs here? What's the performance gains? And there definitely is some when it comes to latency. We'll discuss those as we move through the rest of the notebook. But in terms of the actual effectiveness of the model, the performance hit can be very small. It is not zero. There is a performance hit, but it's incredibly small, which makes this a very effective technique, especially when applied in the way we're going to see it applied today. So that's basically what we're talking about when we talk about this idea of QLora, right? 
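The double-quantization saving can be checked with quick arithmetic, using the block sizes quoted above: one quantization constant per 64 weights, with those constants re-quantized to 8-bit in groups of 256:

    # Extra bits of overhead per model weight coming from the quantization constants.
    single_quant = 32 / 64                    # one fp32 constant per 64 weights -> 0.5 bits/param
    double_quant = 8 / 64 + 32 / (64 * 256)   # 8-bit constants, plus one fp32 constant per 256 of them
    print(single_quant, double_quant)         # 0.5 vs ~0.127 bits of overhead per parameter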
We're talking about dropping a pin on the number line and then saving kind of numbers or representing numbers around that and then doing that one more step abstracted which is harder to visualize but there it is okay so how do we do it in code now right uh well first of all we gotta load our our kind of familiar usual suspects here so we're bits and bytes data sets accelerate uh the laura lib library transformers and peft uh these are all kind of staple libraries we're bits and bytes data sets accelerate the Laura lib library Transformers and peft these are all kind of staple libraries we're going to be using when we're using these uh kind of Q Laura tools and then we're going to grab our model and the model we're going to grab is the Mistral AI Mistral 7B instruct v 0.2 it's the most recent uh instruct model for Mistral it's a great one and then this is kind of uh you know where the magic happens this is the bits and bytes config uh this is from the bits and bytes library we're gonna see that we load in four bit so this means when we actually move our model from those saved weights uh that exist on our on our drive, when we load those into our GPU, we're going to load them in that four-bit quantized state, right? So that's that collection of numbers and then their quantization constants and then their quantization constants because we're using this use double quant, right? If we omitted that use double quant, we would only do one step, and then we would be saving less effective memory. We're also going to be using the quant type of that NF4 I talked about. That's the Tim Detmers and crew created number type, which is information theoretically optimal. Again, not literally true, but it's close enough, so we'll keep saying it. And then we're going to have this idea of a compute D type, which is going to be torch B float 16. Now this is very important, right? So when we store numbers in 4-bit, that's awesome. But when we try to compute with them, it's really bad. It's actually quite bad, right? If you think about when you multiply two numbers together, especially if they're kind of small, right? If you think about when you multiply two numbers together, especially if they're kind of small, right? We usually wind up with a number that is relatively needs more precision to fully accurately understand it, right? When we divide 100 by 1000, we wind up with a very, you know, a small number. And the idea is that we'll need more precision to represent that very small number. So what we do with the QLORA approach is we actually de-quantize whatever we need to compute with our weights. Now, this is done at a per-tensor level. So we never have the full model de quantized in memory, just one tensor at a time, right? So this saves us a ton of a ton of space. And it also lets us have the ability of computing as if we have this model in that higher precision or B float 16 format, right? Which is huge. So we're saving tons of space and then we're de-quantizing. So we also retain some of that compute precision. And that is what lets this method really shine, right? The fact that we de-quantize for computation and then we store in 4-bit. I think without that, this would be a less powerful method. But with that, it's amazing. You can choose up to full precision here. Obviously, that is going to come with some small memory overhead. You do have to upcast a tensor to the full precision, but it's negligible compared to the size of the model. 
And it does also, and this is critical, it does come with some inference and training latency overhead, right? The fact that we have to de-quantize and re-quantize, de-quantize and re-quantize, this means that we're performing an additional operation per computation. And so that is going to impact inference. Now, Tim and team have written some great kernels for this. So it's not very slow, but it is going to be slower than if we weren't doing that extra operation. And so this is one of the key trade-offs, right? We had questions about trade-offs. One of the key trade tradeoffs with Qlora and with the bits and bytes approach is that it is extraordinarily flexible. It is very powerful and it works very well with a PEFT adapter methods. So like LoRa and others, but it does cost us a little bit of inference latency in training time. So that's important to keep in mind. Once we have our bits and bytes config loaded, all we have to do now is just load our model like we normally would. So auto model for causal LM from pre-trained. We're gonna pass in our mistral AI model. We're gonna pass in our quantization config. We're not gonna need the cache and we're gonna map this to auto, which is gonna shove as much as it can into our GPU. In this case, again, because the actual model loaded only takes up about 15 gigabytes of GPU memory, it's all squeezed into the GPU there. So that's great. We do some pre-processing on our tokenizer to make sure that it's set up in the right format for training. And then we can look at our model architecture. You'll notice that we have this four-bit layer, right? This four-bit layer is where that bits and bytes comes in. You'll see that we have the four-bit layer on our QKVO proj as well as our MLP. So it's all four bit, all the way down. This is the idea, right? We don't want to just quantize some of the model. We're gonna quantize as much of it as we can. However, you will notice that we omit some of the layers, specifically we omit our layer norms. And the reason we omit our layer norms is we know that our layer norms. And the reason we omit our layer norms is we know that our layer norms are going to tend to a very, very small number, you know, near zero. And we're going to run into some training instability issues if we use lower precision to represent these layers. So we're actually going to keep those in full precision. Now they're very small compared to their weight matrix counterparts, but we do want to make sure that we're keeping those layer norms in a higher precision. This is to avoid training instability issues, right? If we have these numbers kind of diverge and cause a ruckus, right? We're not going to be able to train very well. And so that's why we don't see those four-bit layers here. Now that we have our model loaded, we can see that it's in four-bit. We're very happy about that. It's time to peftify it. We talked about peft last week, so we're not going to spend too much time on it today, but the idea is fairly straightforward. We are going to use our LoRa config to set up our rank. Our rank is going to be 64 in this case. We're going to use our LoRa config to set up our rank. Our rank is going to be 64 in this case. We're going to set our alpha, which should be by conventional wisdom, about twice your rank. Though you're, you know, again, it's always worth doing hyperparameter searches here to make sure you have the most optimal hyperparameters. Your LoRa dropout, pretty consistent value. Your bias is none. 
Task type is causal, because that's what we're doing. You'll also notice that we have our QVK proj modules. We, again, with QLoRa, we want to target as many modules as we can, right? The QLoRa paper's wisdom is that we should actually target all possible layers of LoRa. In this case, we're just going to leave it up to PEFT to simplify things a bit for us. For our base model, all we have to do is prepare our model for k-bit training. This makes sure that we can train and that all of the trainable layers are set appropriately and that any frozen layers are also set appropriately. And then we're going to get our PEFT model and our PEFT model is going to uh give us those laura layers now you'll notice that we have only 2.7 million trainable parameters out of a possible many billion trainable parameters right and the key thing about q the q and q laura right is well is great, when we make each of these parameters one eighth the size, right, we're effectively reducing this by another factor of about eight. It's not strictly eight because of the fact that it doesn't interact with all layers, but the idea is it's about eight another factor of eight reduction in the uh in the total size of parameters that we have to train which is insane right it's uh we we went from kind of we're already at a fraction of a percentage and then we even further reduce uh the amount of actual uh work that we have to do, which is great. And then we can see here that our LoRa layers are also 4-bit, right? We have our LoRa layers are 4-bit as well as our actual, you know, regular layers that were converted to 4-bit. After that, we're going to load some data. We're just going to grab the Apaka GPT-4 data. We're going to do this Uno reverse card train, just a fun one. It's kind of like the classic now. I think this is what you're going to see. Whenever you do an instruction tune, it's just fun and it really proves the point that the process works. So we're going to ask the model to take a input and then generate an instruction. So we're going to create a model that's good at generating instructions. We're going to use this generate prompt helper function in order to create these prompts that our model will be trained on. And then we're going to set up our trainer. Our trainer, this is all boilerplate. The other big insight from the QLora paper is this paged Atom W32 bit optimizer. I'm not going to go too into it here, but the idea is that this idea of using paged memory is really, really effective, and it helps us train very stably and very efficiently with very little cost to us other than we have to flag it. The rest of this is all boilerplate, right? It's good boilerplate, but it is boilerplate. And we are going to make sure that we have our BF16 equals true, which is going to make sure that our compute D type is compatible when we upcast, which is necessary. It says CUDA, but would a Mac suffice to fine tune the model to the 4-bit? I would recommend a GPU, a NVIDIA GPU for sure. The kernels are written for it. I believe you can use 4-bit on other devices, but it's not necessarily going to be as efficient or as fast. 
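For reference, the pieces walked through so far fit together roughly like this. It is a hedged sketch of the quantized-model-plus-LoRA setup, reusing the bnb_config from the earlier sketch; the rank and alpha follow the walkthrough, while the dropout, batch size, learning rate, and output directory are illustrative stand-ins rather than the exact values used in the notebook:

    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
    from peft import LoraConfig, prepare_model_for_kbit_training, get_peft_model

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token    # common padding fix for Mistral-style tokenizers

    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,   # the 4-bit NF4 config sketched earlier
        device_map="auto",
    )

    lora_config = LoraConfig(
        r=64,                   # rank of the factorized update, as in the walkthrough
        lora_alpha=128,         # roughly 2x the rank, per the rule of thumb mentioned above
        lora_dropout=0.05,      # illustrative value
        bias="none",
        task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )

    model = prepare_model_for_kbit_training(model)   # sets frozen/trainable layers up correctly
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()               # a few million trainable params out of ~7B

    training_args = TrainingArguments(
        output_dir="mistral-7b-qlora-instruct",
        per_device_train_batch_size=4,     # illustrative
        gradient_accumulation_steps=4,     # illustrative
        learning_rate=2e-4,                # illustrative
        bf16=True,                         # keep the compute dtype consistent with the bnb config
        optim="paged_adamw_32bit",         # the paged optimizer from the QLoRA paper
        logging_steps=10,
    )
    # The model, tokenizer, dataset, and these arguments are then handed to TRL's SFTTrainer
    # (with max_seq_length=2048), which is what the walkthrough turns to next.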
The optimization of the kernel really added some speed to this process but i'll get back to you uh more about that uh after a little bit of digging to make sure that you can do this on mac even if it is going to be slightly less efficient uh we're going to use the trl trainer the sft trainer from trl in order to train our, our max sequence length of 2048 just for Mistral itself, and then we can train this using trainer.train. At the end of the day, we reload our model, just a quirk of path. We reload it. We make sure we load it in 4-bit, and then we have our torch D type for float 16. That's the compute D type again. And then we are going to, you know, look at the model. So we say in instruction, identify the odd one out among Twitter, Instagram, and Telegram. That's great. That is, that's an instruction that would result in this, in this, you know, in this kind of the odd one out is Telegram response. And you can see the ground truth is identify the odd one out is telegram response. And you can see the ground truth is identify the odd one out. And if we look at the base model, we can see that the base model's instruction is much less good. It does not even mention telegram. And so, not a very good instruction. But that is it for me and the code demo. So with that, I will pass you back to Greg who will wrap us up. So with that, I will pass you back to Greg. We'll wrap us up. Yeah, thanks, Chris. That was awesome as usual and love that deep dive explanation on exactly what's happening within the quantization method in the QLORA paper. So today we saw building off this PEFT-LORA approach, Today, we saw building off this PEFT-LORA approach, that PEFT-qLORA fine tuning is really about modifying behavior by updating fewer quantized parameters using factorized matrices. So this idea of using fewer parameters and of using the LoRa factorized matrix approach. This gets us from 3.8 billion down to 2.7 million parameters, less than 1%. And then we come in with quantization. This is technically blockwise k-bit quantization, effectively just allowing us to express more information with less. And the key to the QLoRa method is that from that 2.7 million parameter level we're coming in and we're starting to actually quantize that down to four bit before we we begin training during training we will de-quantize when we have to do computations and before re-quantizing to continue the training process. Next week, we are going to be covering how to not fine-tuning and loading, but now serving an inference with VLLM. So we hope you can join us for that one. But for today, we're going to go ahead and get started with the Q&A period. I'd love to invite Chris back up to the stage. And if you guys have questions, it looks like Manny is crushing it in the Slido right now. So shout out to Manny as usual. But if you guys have questions, crushing it in the Slido right now. So shout out to Manny as usual. But if you guys have questions, throw it in the Slido. We'll also try to get to your questions if you throw them in the YouTube live chat. But Chris, let's go ahead and jump right into it here. First question. Is the reason we don't get inference latency benefit with QLORA because model weights are re model weights are retained as 32 bit during inference. I mean, I, yeah, I mean the question, uh, to be more specific about, uh, the phrasing, I think we could say that the, the model weights are de-quantized to a higher precision during inference. So yes, that is why we don't see a benefit to inference. 
In fact, we see a penalty. It's not a big penalty, but there is a penalty. And so, but yes, that's exactly why. Oh, okay. Nice, nice. Yeah, excellent question. Astute one there. And then first one from Manny here. When we're talking about parameters, are we referring to additional features such as Xs in the equation, Y equals predict X1, X2, Xn? Are X1 to Xn considered parameters? What are we talking about when we say parameters? Yeah, parameters, features, it's all numbers, weights. I mean, we have so many different names for similar kinds of objects. I would think of parameters more specifically as the entities that fill up these weight matrices that we use to compute when we're actually doing that matrix multiplication. But yes, I mean, essentially a parameter, a parameter is any node in the, in the model architecture, right? So this is not something that you're going to want to use with like your XG boosts or your, you know, your kind of traditional ML methods, right? It's not like a random floor forest applicable, you know, technique. It's specific to that deep neural architecture. And it's also specific right now to that transformer architecture, though there's no reason it needs to be. It is most explored in that space. Hopefully that answers the question, Manny. Yeah, yeah. Well, we'll kind of flow through some of these other questions and pop back to Manny's questions as well. I think this one's super relevant to everybody. If I don't have a powerful laptop, where can I practice these techniques? Honey, P, it's Colab. Get yourself into Colab. Colab makes it so easy. And the whole benefit of this kind of thing is we can load these very large models with very little resource. And so oftentimes, you can load like a 3 billion or 6 billion parameter model, you can load that in a free instance of Colab right using the free free tier GPU, the T four. So it's I think that's a great way to start if you don't have a powerful laptop. As you get more embroiled in the space, you might look at other cloud hosting solutions, Lambda or AWS, whatever you want. But for the getting started beginner, I would say Colab is your best friend. If you want to, you can pay for compute so you can pay to get a little bit more uh beefy gpus but uh stick to the free tier and and stick with your kind of three to six billion parameter models and you're gonna have a great time yeah yeah yeah yeah stick to the three to six bill quantize quantize quantize quantize and uh and then colab like we we teach entire courses in collab and we do a ton of fine tuning throughout so you know just try to be as efficient as possible don't sit there and do tuning for you know days and days at a time if that's not really something that you're interested in you know use small, try to make the model as small as possible through picking the small size of Hugging Face and then quantization for sure. But yeah, there should be nothing stopping you if you're a beginner. You don't have to get AWS. You don't have to get all these things. Okay, Islam, we got a question that's getting upvoted here. Can I do this fine tuning with Lama CPP? And is this fine tuning possible to plug into the end-to-end fine tuning within a RAG framework? So E2E fine tuning within RAG framework, yes, 100%. The RCAI, we've done an event with them. Their DOM framework and GitHub, we'll get a link for you guys to drop into the chat. That is 100% a great tool that does leverage or can leverage LoRa as well as quantized methods. 
In terms of Lama CPP, I'd have to double check. I don't know off the top of my head, but I will double check and then we can include that information in a comment if I'm unable to find it before the end of our time together today. Okay. All right. Back to Mandy's next question. We say weights and biases when we talk about ML models or neural network models. So if weights are parameters, are we saying weights and biases that are parameters in the LLM world are weights and biases parameters? Let me think through this question. world are weights and biases parameters? Let me think through this question. We say weights and biases when we talk about LLM. So if weights are parameters, are we saying weights and biases parameters? Like our bias is also parameters? I guess is that the question? No. But yes. I mean, I mean, at the end of the day, the thing we care about is the weights. That's, that's, that's, that's all answer this question. We want to update the weights, aka the parameters. Okay. All right. Good stuff. Then I'm gonna go ahead. Last manny question here. Can you speak about examples of LoRa adapters? Like, what are they? And what are they created for? a tool perspective. So let's say we create a LoRa adapter that's very good at translating natural language to SQL. And then we create another LoRa adapter. And that LoRa adapter has been fine tuned to translate natural language to Python. Then we create another adapter and you you see you can kind of go on that the idea is that whenever we do inference we can choose whichever of those adapters or those laura layers to flow information through that's going to make our output consistent with what we fine-tuned it to do so you can you can think of them as little hats you can put on your model that's going to change its behavior, but it doesn't touch the, it doesn't modify or it doesn't have to modify the base model at all. Just kind of this hat that sits on top of it, but gets it to do a different job. And the idea is that we can choose those hats as we want, even at time of inference, we can choose which hat we want it to wear. Yeah. Yeah. And I mean, you know, this is like the thing for businesses too. It's like, if you think about these adapters, man, it's like they're plug and play. And so if you want the LLM to do something super specific, that prompt engineering has only gotten you so far and you just can't get what you need exactly to get in and out in specific ways with your customer or your user. If you want to really constrain what your user can put in, you want to really constrain what comes out, this fine-tuning piece, this lore adapter piece is going to be like your friend. You know, we had a great meme that we posted on LinkedIn recently where it's sort of like if you're doing fine tuning, you're kind of doing LoRa. So it's sort of like this is a big question. You know, examples of LoRa adapters would be like anything that you fine tuned, you know, you might say and. OK, we've got a couple of minutes left. I'd like to shout out out to you know thanks for the great note just want to appreciate your efforts uh appreciate a lot it looks like we've got uh george i think he's struggling with a specific error maybe we can comment on that after the the event he's he's put his error into slido as well um i guess uh last question this is a big question. So you can take maybe two minutes, Chris, what are the trade-offs of using dimensional reduction techniques like LoRa, QLoRa, PEFT on LLMs in terms of training, inference, fine tuning? 
Like when you think of trade-offs, maybe best practices here, what do you think of? I mean, the big one is quality or like how good the output is uh there is a trade-off there it's really small and beyond being really small it's really small like so okay this is this is the way i think about trade-offs when it comes to laura and and the crew uh i can i can find you the laura model right to be let's say like 98% as effective as full fine tuning, right? But I can do that in a 10th of the time with a thousandth of the resources, right? So divide by a thousand, the number of resources. I mean, that is a trade-off. There is a trade, you're losing 2%. But like, it doesn't feel like a real trade off. And especially in terms of business value. It's not like a, it's not a real trade off these days, like, especially if you use a high enough R or rank in your your Laura, so you're using that kind of 128 are, you're still getting a massive reduction in compute but you're retaining so much of the performance that it it truly doesn't feel like a trade-off it there is a trade-off to be clear there is always technically going to be a trade-off but it lets you do things you wouldn't be able to do so it doesn't feel like a trade-off i I mean, for small companies, you can fine tune a model that does a whole new novel thing that fuels your business, that is your business, right? That you just couldn't do if you didn't use these methods. In that case, there is no trade-off, right? It's enabling you to do something that was previously impossible to you. That's only advantage. When it comes to inference specifically, possible to you that's only advantage uh when it comes to inference specifically both uh the the Q Laura or any quantized uh method using bits and bytes and Laura if you're talking about non-merged Laura adapters do impart a small inference latency penalty it is. At scale, it can maybe be felt, right? If you're really getting to those hundreds of thousands of requests per second compared to a very efficient model, you might want to re-quantize that to another format and serve that model directly instead of having it part of your LoRa stack. But again, these are problems that come with scale and that scale kind of also helps you fund the solution. But outside of that, you're not going to feel these issues until you're into the six figures or more requests per second for your kind of LLM stack. So I would say there are trade-offs, but when you're getting started, they really don't appear as trade-offs. All right. Yeah. Okay. So use PEFTQ, Laura, unless you got a million requests per second. Sounds like a plan, dude. All right. Cool. Let's go ahead and wrap it up. Thanks, Chris. And can't wait till next time. Thanks, everybody, for joining us today. Again, next week, we'll be back talking inference and serving and how to do it efficiently with VLLM, one of the hottest open source tools out there for doing that. We'll tell you a little bit about the tool and its background. If you like this session, you might also really like cohort four of LLM Ops, LLMs, and Production launching February 13th. In that course, which we're going to be soon announcing an expanded curriculum for, you'll learn to prototype and scale production LLM systems, including using RAG techniques, including fine tuning, and so much more. Check it out in the link. And then lastly, please share any feedback you have on today. You can drop it in the chat or you can drop it in the feedback form. That will drop to you now. 
And that's it for today. Until next time, keep building, shipping, and sharing, and you know we'll be doing the same thing. See y'all next week.", "datetime": "2024-06-09T20:04:44.501496"}
train/transcriptions-b69a3458-f38f-4a31-84e8-ea62708adce8.json
ADDED
@@ -0,0 +1 @@
{"url": "https://www.youtube.com/live/XOb-djcw6hs", "transcription": " Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? It sure is, Greg. Yes. And is quantization like really as good and as dope as everybody's talking about? Yes. Emphatically, yes. Emphatically, yes. Man, I cannot wait to see exactly what's going on inside. You're going to show us how to do this today, right? Sure. All right. Let's go ahead and get right into it, man. We'll see you back in just a little bit. Today, we're going to talk quantization. I'm Greg. That's Chris. We're from AI Makerspace. This is a bit of an add on to last week's event, which talked about parameter efficient fine tuning and low rank adaptation. Today, we're gonna take it to the next level and talk quantization. We'll demystify the idea of quantization, and we will also talk about how to leverage the latest in low ink adaptation which is a quantized version of it called QLORA as always we'll be collecting questions with slido so go ahead and provide your questions for us throughout the day at that link and then we'll go ahead and answer as many as we can when we're through with the demo at the end. Of course, we'll have Chris back to lead and wizard his way through the demo on quantization soon, but for now, let's cover what we need to know so that's going to make sense to us. We're going to talk quantization of LLMs today, and we're going to talk fine-tuning with LoRa. This is the main goal. We want to understand and we want to align our aim to really grokking QLoRa and then seeing how we can implement that. We got a little bit of insight into quantization last time when we were loading the model but now we want to take a look at how it can be used to fine tune and some of the background and intuition associated with why this works and what the industry has sort of learned about the precision of numbers within our llms so we're going to talk-tuning quantization QLORA, and then we'll do it. And to sort of contextualize this, similar to last time, we wanna understand that often fine-tuning is coming into play after we do prompt engineering, often after we set up a retrieval augmented generation system. And we wanna now take a look at how we can optimize our large language model, or in other words, how we want the model to act, how we want the input and output schema of the model to be a little bit more constrained, a little bit more dialed in, a little bit less large, a little bit more small. And this is sort of the trend we're noticing as 2024 is upon us now. We are seeing a bigger and bigger interest in smaller, more performant language models and fine tuning is really a key aspect that's going to help us to get there so let's just remind ourselves what we talk about when we talk about fine tuning with peft laura PEFT LoRa. And why we need to do this. You know, when we talk LLMs, they're super big. They have billions and tens of billions of parameters. It's likely we'll see models with hundreds of billions of parameters before too long. Not all models are always getting bigger, but some of them are. And the reason is, is because if we keep adding more text and more parameters, we are pretty confident that our next word prediction will continue to improve. prediction will continue to improve. 
But as we do this, as we build larger and larger models, as we have to deal with more and more compute in order to be able to handle them, whether that's loading them, training them, fine tuning them, or performing inference on them and serving them. We're kind of abstracting away from the regular developer, the regular individual out there that doesn't have access to a giant cluster of GPUs to be able to even play with these things. And this is the core problem, is that when we go and we want to do full fine-tuning on many, many billions of parameters, this becomes a huge pain for anybody trying to use consumer hardware, any small business trying to just use the laptops that they have, maybe a few resources on the cloud. And this is as true for fine tuning as it is for loading and storing, certainly for deploying these models. It just costs too much. And the solution for kind of dealing with the fine tuning, the storing and the deploying is kind of the same. But today we're focusing on fine tuning. Today we're focusing on fine tuning using fewer parameters. It's all about using fewer parameters. We don't need all of them as we started to get some intuition into last time. And in fact, the ones that we have, what we're going to do today is we're going to take those parameters and we're going to make them smaller in a sense. We're going to make them smaller in a computational sense. This is the essence of quantization. So while it may not be necessarily fewer parameters when we talk about quantization, although it often is when we talk about fine-tuning, we're just trying to move these big, big, big models towards smaller packages through fewer parameters and through more efficient representation of those parameters. And we saw last time, we saw that LoRa is the number one PEF method you should know. It's called low-rank adaptation. And the big idea of LoRa, as we discussed, was to fine-tune using factorized matrices. And again, we didn't focus on fine-tuning absolutely everything. We did fewer parameters. That was great because it was more efficient. And we found out that we could actually leverage LoRa adapters for many tasks. So you could have one big, big model and a ton of different lower adapters and deploy that to production. Deploy each of those adapters to production because at inference is when the adapter would actually come into play. So very, very flexible, very good technique for. Larger companies and industry, especially that want to just have many adapters and larger companies and industry, especially that want to just have many adapters in one very powerful model, we'll probably start to see this emerge as an approach to AI development in the enterprise. And, you know, it's really comparable to fine tuning, full fine tuning. full fine-tuning. So, you know, we saw, in essence, that fine-tuning is all about modifying behavior of LLMs to update parameters. Parameter-efficient fine-tuning is all about fine-tuning with fewer parameters. Low-rank adaptation was all about fine-tuning using factorized matrices. And so parameter-efficient fine-tuning through low-rank adaptation is all about modifying behavior by updating fewer parameters using factorized matrices. So this all sort of flows together. This leads us directly to our new friend, quantization. And this meme is so good, I had to put it twice, because it's such an oft misunderstood idea. Certainly has taken a long time for me personally to really try to grok this thing. 
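Before the quantization discussion continues, here is a minimal sketch of the LoRA factorization idea recapped above: a frozen weight matrix plus a low-rank update learned from two small factors. The dimensions, rank, and initialization scale are illustrative assumptions, not values from the demo, and the usual alpha/r scaling factor is omitted for brevity.

```python
# Minimal LoRA sketch: instead of updating the full d x k matrix W, learn two
# small factors B (d x r) and A (r x k) whose product is added to frozen W.
import torch

d, k, r = 4096, 4096, 64            # hidden sizes and LoRA rank (assumed values)
W = torch.randn(d, k)                # frozen pretrained weight -- never updated
A = torch.randn(r, k) * 0.01         # trainable LoRA factor
B = torch.zeros(d, r)                # trainable LoRA factor (zero init => delta starts at 0)

print(W.numel())                     # 16,777,216 parameters in the full matrix
print(A.numel() + B.numel())         # 524,288 trainable LoRA parameters (~3%)

x = torch.randn(1, k)
# Effective forward pass y = x @ (W + B @ A).T, computed without ever
# materializing an updated copy of W.
y = x @ W.T + (x @ A.T) @ B.T
```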
So let's see if we can break it down in a way that makes sense to all of you. First off, the weights of our LLM, when we talk about weights, it's the same thing as when we talk about parameters. Okay. So parameters, I might say weights, we're still talking about parameters. Those parameters are simply numbers. They're just numbers. And specifically, they're floating point numbers, They're floating point numbers, also known as floats. And it's important to understand a little bit of the detail here, because this is the essence of what we're doing in quantization. When we talk about floats, you may harken back to your days in school, maybe chemistry, back to your days in school, maybe chemistry, where you learned about significant figures, sig figs, everybody's favorite, right? And then if you're like me, you become an engineer and you don't care anymore, ever again. But I was a mechanical engineer. If you're a computer scientist, computer engineer, maybe you continue to go deeper. And these days in AI, if you're a developer, you need to continue to go a little deeper. Because this idea of a float is cool, this integer with a fixed precision, we can talk about representing, for instance, 12.345 as 1, 2, 3, 4, 5 times 10 to the minus 3. And we can then do this by using a specific number of bits in our computer. When we talk about this precision, this fixed precision, there's a number of different types of precision. What we're going to generally be using is what's called full precision when we're doing computations that are kind of default computations. Full precision means that I have 32 bits to represent my floating point number. And they're broken up into a couple different pieces here, but the big idea is that there's 32 bits. And the question is, is that the right amount when we want to go and deal with 70 billion parameter models and things like that? And it turns out that in machine learning, we found sort of over time through experiments that if we didn't use 32-bit precision and instead we used 16-bit precision, Instead, we used 16-bit precision, essentially half precision, to again, simply represent those decimal numbers that are inside of each of the neural network, that represent each of the neural network weights, sort of each of the neural network perceptrons is a way you could think about this. Then what we're finding is that we can get almost identical inference outcomes from our LLM. Because remember, we just want the words that come out at the end. We just want the ideas that come out of that. We just want the outputs. We don't necessarily care about the precision of the stuff within the black box. We put in, we get out. And a lot of people were seeing this. A lot of researchers were seeing this with the large language models, that if we just leveraged half precision we can get very very good outcomes and what this does is this effectively halves the entire model size so what are we saying we're saying that we can sort of get exactly the same thing coming out coming out, even if we represent each of the model weights using half as much information we can think about. Because really, I mean, how many sig figs do we need? And another way we can talk about moving from a 32-bit down to a 16-bit representation is we can say we are quantizing. We quantize the 32-bit weights down to 16-bit. weights down to 16 bit. Hence quantization. Now, when it comes to quantization, there are many different approaches to quantize model weights. So, this is very important. 
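To make the 32-bit versus 16-bit point concrete, here is a tiny PyTorch illustration (an assumption of this write-up, not code from the event): the same weights stored at half the memory per parameter, with only a small round-trip error.

```python
# Quick illustration of the precision/memory trade discussed above.
import torch

w32 = torch.randn(1000)              # "weights" in full precision (float32)
w16 = w32.to(torch.float16)          # half-precision copy

print(w32.element_size(), "bytes/param vs", w16.element_size(), "bytes/param")  # 4 vs 2
# The round-trip error is tiny relative to the weight values themselves:
print((w32 - w16.to(torch.float32)).abs().max())
```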
We're not going to cover every single approach, because that's not really necessary for what we want to discuss today. But there are many different ways to quantize model weights, and we hope to keep bringing you content on approaches that differ in their actual implementation and nuances. For today, we're going to focus on and use this QLoRA idea as a lens. Now, the QLoRA story begins with a paper called 8-Bit Optimizers via Blockwise Quantization. This was a paper that came out of the University of Washington. Tim Dettmers was the lead author, and he's been quite a superstar in the field; he's kind of the quantization guy. In this paper, they showed that you can use 8-bit representations and maintain performance at the level of full precision, or 32-bit. So here we see, in this kind of early paper, one of these pieces of work where they're saying, hey, look, experimentally we're seeing that if we reduce the precision, we can still get great results. And this is not reducing it to half precision; it's reducing it to quarter precision, 32 down to 8. And this bits and bytes paper turned into what became the Bits and Bytes library, which has since evolved and is something that we'll see Chris use today, and it's something that gets used all the time now. Now, Bits and Bytes: recall that one byte is equal to eight bits. We're going to continue the discussion in bits today, but you'll see many papers and discussions that talk in bytes as well. So it's pretty simple to understand why the library was named bits and bytes. Now, again, this is one approach, and so there are some trade-offs, as there are with any approach. For instance, when we use the bits and bytes approach to quantization, we're not really getting any additional benefits to our inference latency. We're not really speeding up inference a whole lot by using this particular approach to quantization. However, what we are doing is leveraging a tool that gives us very flexible use of those LoRA adapters, right? So for enterprise, if we're thinking about how do I have one big model and just a bunch of adapters, this is going to be our friend. And this is why we choose to focus on this one today. And this bits and bytes library forms the basis for what comes next. It kind of forms the basis for this QLoRA idea, this efficient fine-tuning using quantization. And the big takeaway of fine-tuning with quantization from the QLoRA paper is that it's super great, even though it's eight times less precise. So what we actually have going on in QLoRA is not an 8-bit representation but a 4-bit representation. And what they showed is completely insane: the QLoRA paper demonstrates fine-tuning models as large as 65 billion parameters, and we can fit all of that on a single 48 gig GPU, which is just kind of incredible; it's kind of mind-blowing that we can do this. And so this QLoRA paper is essentially coming and saying, hey, listen, we've got this idea that we can do fine-tuning using a four-bit approach versus even a half-precision approach, and we get amazing results. And so this is the essence of what's going on here with QLoRA. 
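The back-of-the-envelope storage math behind those claims is simple. A 7B-parameter model is assumed here for illustration; optimizer state, activations, and quantization constants are ignored.

```python
# Weight-storage cost at different precisions (bits per parameter / 8 = bytes).
params = 7_000_000_000
for bits in (32, 16, 8, 4):
    gb = params * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: ~{gb:.1f} GB")
# 32-bit ~28 GB, 16-bit ~14 GB, 8-bit ~7 GB, 4-bit ~3.5 GB
```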
And so what we can kind of think about is if we go back to this idea of PEPF-DLORA fine-tuning, where we're modifying behavior by updating fewer parameters using factorized matrices. And we add this idea of quantization, where quantization is simply representing high precision numbers with low precision. Then we get to this place where we talk about PEFT-QLORA fine-tuning, where we talk about PEFT QLORA fine-tuning, where we're modifying behavior by updating fewer quantized parameters using factorized matrices. And so the process as outlined in the QLORA paper and the process that you're going to see today is something like this. We download the model weights. Anytime you download model weights from Hugging Face, they're always going to be in full precision, 32-bit. Then we load our parameter efficient fine-tuning model into GPU memory. Anytime we load into GPU memory for inference or training, we're going to be loading using that parameter efficient fine tuning method. And then we'll initialize our low rank adaptation, our LoRa configuration. And finally, and this is the key, this is the key to the whole thing, is that during training, what happens is we have the full precision 32-bit model, and we're going to actually load the 4-bit model, quantize 32-bit down to 4-bit, for training. Quantize 32-bit down to 4-bit for training. Now, during training, we're going to flow through the network, and we're going to, as necessary, each time we have to do a computation, each time we have to calculate something during our training process, we're going to de-quantize that 4-bit representation back up to a 16-bit half-precision representation. We're going to do the calculation, and then we're going to re-quantize back down. And at each step of our training or fine-tuning, we're going to quantize, de-quantize, move on. So we're never holding that half precision fully in our GPU memory. But rather, we're simply using half precision to do the calculations. This is the magic of what's really going on behind the scenes. And it turns out this works incredibly well. And again, that intuition behind the 16-bit piece is that we saw that for inference, you can go from 32- down to 16 bit and get very very good results we saw this experimentally over a lot of time not just papers from the university of washington but also papers from many other researchers and this q laura approach fundamentally Fundamentally, is to load those full precision weights into GPU memory as quantized 4-bit weights. And then only de-quantize up to 16-bit during calculation. Back down as it moves through. All right. So this is the core approach that we're going to see today. You're going to see things like this. This is the bits and bytes configuration. And you'll notice when we want to load in, we want to load in in 4-bit. You're also going to see a data type called NF4. Chris is going to talk a little bit more about it. It's very important. It's very essential to the QLOR approach. And that's it for the big ideas we need to really see how this build can be taken to the next level. So what we wanna do is we wanna take the same build that we've already looked at, the old UNO reverse card build, given the response, predict the instruction. We want to use the same model that we saw last week because it's still one of the best out there. Mistral 7B instruct V0.2. And we're going to use the same data for fine tuning. Just keep everything simple. That Alpaca GPT-4 data set is there. So again, output response, predict input instruction. 
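The bits and bytes configuration referenced above is, in the Hugging Face transformers API, a BitsAndBytesConfig. A hedged sketch follows; the exact flags used in the demo notebook may differ slightly.

```python
# Sketch of the 4-bit bitsandbytes configuration discussed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights on the GPU in 4-bit
    bnb_4bit_quant_type="nf4",              # the NF4 data type discussed in the demo
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # de-quantize to bf16 for each computation
)
```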
And with that, we're ready to kick it back over to Chris, the wizard, to show us how to do fine tuning with PATHQ, Laura, and fill in some additional details. Wiz, over to you, man. Q Laura and fill in some additional details. Wiz over to you, man. Oh yeah, thanks Greg. Really appreciate it. And guys, I'm excited because quantization is definitely one of my favorite topics. It is the kind of like one of the best things we could do right now. And as you can see, we only used around 20 gigabytes of GPU RAM to train this 7 billion parameter model, which is quite impressive in my lens. That includes fine tuning. In any case, we'll get right into it. First of all, we're going to be using Mistral 7B Instruct V02. This is just Mistral's most recent Instruct tune model. I love it. And we're going to now move on from PEFT, which we discussed last week, into the Q in QLORA. So we discovered or we discussed, you know, the idea of how we can reduce the number of parameters that we train. But now how do we reduce the size of the parameters that we train? Now, first of all, what is quantization? Greg already talked us through it. I'm going to give a brief overview here of what's happening under the hood, and then we'll get into how to implement it in code. Spoiler alert, it's super easy. Thanks, bits and bytes. But let's look at what quantization is from this perspective. So quantization is a process of discretizing an input from a representation that holds more information to represent a representation with less information right that's crazy so the idea is we want to express more information with less information so how do we actually do that well in the tim detmer's q laura paper they rely on this process called blockwise k-bit quantization which which sounds, you know, like, very, you know, scary, but it's not so bad. It relies on two very important things. One, it relies on the fact that in neural networks, the model weights are mostly normally distributed. So as soon as we, if you're, if you're coming from a stats background, as soon as you hear that word normal distribution you you know your your eyes should light up uh you know we're we're going to be able to make use of a lot of very clever tricks uh to help us do whatever we're trying to do um and then it also relies on this idea of the nf4 format which which is a number format or data type created by Tim Detmers and team, which is information theoretically optimal. Now, not literally, it was proven this is not literally true, but it is empirically, for all intents and purposes, this is a fact that NF4 is very, very efficient, which is excellent. So how does this work behind the scenes, right? So, okay, we get it. Model weights are normally distributed. That's great. So what we're going to do is we're going to essentially put a pin in the number line that is near to the mean, right, of our desired numbers, which are going to be in a distribution. And that distribution is going to be normal, right? And then we're going to kind of use that mean as a zero point. And we're going to use this NF4 data type, which is a zero centered number format to represent the numbers that appear around that specific point in the number line. So there's a step that needs to take place here. We're going to normalize all of our numbers to be within a specific range of minus one to one. And then we're going to be able to have this idea of a saved place on our number line that we're going to understand a range around. And that's really about it. 
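To make the "drop a pin and normalize to [-1, 1]" picture concrete, here is a simplified NumPy sketch of blockwise 4-bit quantization. It is an illustration only: real NF4 uses a fixed set of 16 levels derived from the quantiles of a normal distribution, whereas a uniform grid stands in for them here.

```python
# Simplified blockwise 4-bit quantization, for intuition only (not real NF4).
import numpy as np

def quantize_block(block, levels):
    absmax = np.abs(block).max()          # the "pin": one constant per block
    normed = block / absmax               # now everything lies in [-1, 1]
    idx = np.abs(normed[:, None] - levels[None, :]).argmin(axis=1)
    return idx.astype(np.uint8), absmax   # 16 possible codes (4 bits) + one constant

def dequantize_block(idx, absmax, levels):
    return levels[idx] * absmax           # approximate reconstruction

levels = np.linspace(-1.0, 1.0, 16)       # uniform stand-in for the 16 NF4 levels
weights = np.random.randn(64)             # one block of 64 "weights"
codes, c = quantize_block(weights, levels)
approx = dequantize_block(codes, c, levels)
print(np.abs(weights - approx).max())     # reconstruction error stays small
```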
Now, it's a bit simplified, and you can definitely look at the paper for the math. It's great. But the idea is that we kind of drop a pin in the number line, and we have this NF4 number format, which represents a range around that point on the number line. And that is what's going to build up the buckets, or bins, that we're going to use to represent our numbers. And the reason this works so well is, again, because model weights are normally distributed, and because NF4 is an information-theoretically optimal data type for that minus one to one range for normally distributed values. So that means the only reason this works is because of that first fact, right? Now, beyond just that, QLoRA does an extra step. You might have thought to yourself, when I said drop a pin in the number line: well, okay, if we drop a pin in the number line, that's all well and good, but doesn't that mean we still have kind of a high-precision number? It doesn't have to be as high precision, perhaps, but it's definitely still high precision. And that's true, right? That pin we drop is high precision. Well, it can be used to represent many numbers; in this case, 64 numbers, from the QLoRA paper. So each pin is associated with 64 numbers. Tim Dettmers and crew said that's not enough; that's going to give us 0.5 bits per parameter of overhead, right? So we need to go bigger. So what they did is they actually took all of those quantization constants, which is the technical term for the pin that we're dropping, and then they also quantized those. So we represent our quantization constants in an 8-bit format, and we use one 32-bit constant for every 256 of those. So we have one 32-bit precision quantization constant that sits on top of 256 8-bit quantization constants, and each of those sits on top of 64 4-bit weights. So you can see the savings in terms of memory here are insane, right? We're able to represent so much of our data in that 4-bit representation. And we're also able to do it in a way that retains a ton of information. And that is key. I saw some questions in the YouTube chat concerning, you know, what are the trade-offs here? What are the performance gains? There definitely are some when it comes to latency. We'll discuss those as we move through the rest of the notebook. But in terms of the actual effectiveness of the model, the performance hit can be very small. It is not zero. There is a performance hit, but it's incredibly small, which makes this a very effective technique, especially when applied in the way we're going to see it applied today. So that's basically what we're talking about when we talk about this idea of QLoRA, right? 
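The overhead arithmetic implied by those numbers works out as follows; the ~0.127 bits-per-parameter figure matches the double-quantization result reported in the QLoRA paper.

```python
# Memory overhead of quantization constants, per parameter.
block_size = 64                                       # 4-bit weights per constant
single = 32 / block_size                              # 0.5 bits/param without double quant
double = 8 / block_size + 32 / (block_size * 256)     # 8-bit constants, re-quantized in groups of 256
print(single, double)                                 # 0.5 vs ~0.127 bits per parameter
```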
We're talking about dropping a pin on the number line and then saving kind of numbers or representing numbers around that and then doing that one more step abstracted which is harder to visualize but there it is okay so how do we do it in code now right uh well first of all we gotta load our our kind of familiar usual suspects here so we're bits and bytes data sets accelerate uh the laura lib library transformers and peft uh these are all kind of staple libraries we're bits and bytes data sets accelerate the Laura lib library Transformers and peft these are all kind of staple libraries we're going to be using when we're using these uh kind of Q Laura tools and then we're going to grab our model and the model we're going to grab is the Mistral AI Mistral 7B instruct v 0.2 it's the most recent uh instruct model for Mistral it's a great one and then this is kind of uh you know where the magic happens this is the bits and bytes config uh this is from the bits and bytes library we're gonna see that we load in four bit so this means when we actually move our model from those saved weights uh that exist on our on our drive, when we load those into our GPU, we're going to load them in that four-bit quantized state, right? So that's that collection of numbers and then their quantization constants and then their quantization constants because we're using this use double quant, right? If we omitted that use double quant, we would only do one step, and then we would be saving less effective memory. We're also going to be using the quant type of that NF4 I talked about. That's the Tim Detmers and crew created number type, which is information theoretically optimal. Again, not literally true, but it's close enough, so we'll keep saying it. And then we're going to have this idea of a compute D type, which is going to be torch B float 16. Now this is very important, right? So when we store numbers in 4-bit, that's awesome. But when we try to compute with them, it's really bad. It's actually quite bad, right? If you think about when you multiply two numbers together, especially if they're kind of small, right? If you think about when you multiply two numbers together, especially if they're kind of small, right? We usually wind up with a number that is relatively needs more precision to fully accurately understand it, right? When we divide 100 by 1000, we wind up with a very, you know, a small number. And the idea is that we'll need more precision to represent that very small number. So what we do with the QLORA approach is we actually de-quantize whatever we need to compute with our weights. Now, this is done at a per-tensor level. So we never have the full model de quantized in memory, just one tensor at a time, right? So this saves us a ton of a ton of space. And it also lets us have the ability of computing as if we have this model in that higher precision or B float 16 format, right? Which is huge. So we're saving tons of space and then we're de-quantizing. So we also retain some of that compute precision. And that is what lets this method really shine, right? The fact that we de-quantize for computation and then we store in 4-bit. I think without that, this would be a less powerful method. But with that, it's amazing. You can choose up to full precision here. Obviously, that is going to come with some small memory overhead. You do have to upcast a tensor to the full precision, but it's negligible compared to the size of the model. 
And it does also, and this is critical, it does come with some inference and training latency overhead, right? The fact that we have to de-quantize and re-quantize, de-quantize and re-quantize, this means that we're performing an additional operation per computation. And so that is going to impact inference. Now, Tim and team have written some great kernels for this. So it's not very slow, but it is going to be slower than if we weren't doing that extra operation. And so this is one of the key trade-offs, right? We had questions about trade-offs. One of the key trade tradeoffs with Qlora and with the bits and bytes approach is that it is extraordinarily flexible. It is very powerful and it works very well with a PEFT adapter methods. So like LoRa and others, but it does cost us a little bit of inference latency in training time. So that's important to keep in mind. Once we have our bits and bytes config loaded, all we have to do now is just load our model like we normally would. So auto model for causal LM from pre-trained. We're gonna pass in our mistral AI model. We're gonna pass in our quantization config. We're not gonna need the cache and we're gonna map this to auto, which is gonna shove as much as it can into our GPU. In this case, again, because the actual model loaded only takes up about 15 gigabytes of GPU memory, it's all squeezed into the GPU there. So that's great. We do some pre-processing on our tokenizer to make sure that it's set up in the right format for training. And then we can look at our model architecture. You'll notice that we have this four-bit layer, right? This four-bit layer is where that bits and bytes comes in. You'll see that we have the four-bit layer on our QKVO proj as well as our MLP. So it's all four bit, all the way down. This is the idea, right? We don't want to just quantize some of the model. We're gonna quantize as much of it as we can. However, you will notice that we omit some of the layers, specifically we omit our layer norms. And the reason we omit our layer norms is we know that our layer norms. And the reason we omit our layer norms is we know that our layer norms are going to tend to a very, very small number, you know, near zero. And we're going to run into some training instability issues if we use lower precision to represent these layers. So we're actually going to keep those in full precision. Now they're very small compared to their weight matrix counterparts, but we do want to make sure that we're keeping those layer norms in a higher precision. This is to avoid training instability issues, right? If we have these numbers kind of diverge and cause a ruckus, right? We're not going to be able to train very well. And so that's why we don't see those four-bit layers here. Now that we have our model loaded, we can see that it's in four-bit. We're very happy about that. It's time to peftify it. We talked about peft last week, so we're not going to spend too much time on it today, but the idea is fairly straightforward. We are going to use our LoRa config to set up our rank. Our rank is going to be 64 in this case. We're going to use our LoRa config to set up our rank. Our rank is going to be 64 in this case. We're going to set our alpha, which should be by conventional wisdom, about twice your rank. Though you're, you know, again, it's always worth doing hyperparameter searches here to make sure you have the most optimal hyperparameters. Your LoRa dropout, pretty consistent value. Your bias is none. 
Task type is causal, because that's what we're doing. You'll also notice that we have our QVK proj modules. We, again, with QLoRa, we want to target as many modules as we can, right? The QLoRa paper's wisdom is that we should actually target all possible layers of LoRa. In this case, we're just going to leave it up to PEFT to simplify things a bit for us. For our base model, all we have to do is prepare our model for k-bit training. This makes sure that we can train and that all of the trainable layers are set appropriately and that any frozen layers are also set appropriately. And then we're going to get our PEFT model and our PEFT model is going to uh give us those laura layers now you'll notice that we have only 2.7 million trainable parameters out of a possible many billion trainable parameters right and the key thing about q the q and q laura right is well is great, when we make each of these parameters one eighth the size, right, we're effectively reducing this by another factor of about eight. It's not strictly eight because of the fact that it doesn't interact with all layers, but the idea is it's about eight another factor of eight reduction in the uh in the total size of parameters that we have to train which is insane right it's uh we we went from kind of we're already at a fraction of a percentage and then we even further reduce uh the amount of actual uh work that we have to do, which is great. And then we can see here that our LoRa layers are also 4-bit, right? We have our LoRa layers are 4-bit as well as our actual, you know, regular layers that were converted to 4-bit. After that, we're going to load some data. We're just going to grab the Apaka GPT-4 data. We're going to do this Uno reverse card train, just a fun one. It's kind of like the classic now. I think this is what you're going to see. Whenever you do an instruction tune, it's just fun and it really proves the point that the process works. So we're going to ask the model to take a input and then generate an instruction. So we're going to create a model that's good at generating instructions. We're going to use this generate prompt helper function in order to create these prompts that our model will be trained on. And then we're going to set up our trainer. Our trainer, this is all boilerplate. The other big insight from the QLora paper is this paged Atom W32 bit optimizer. I'm not going to go too into it here, but the idea is that this idea of using paged memory is really, really effective, and it helps us train very stably and very efficiently with very little cost to us other than we have to flag it. The rest of this is all boilerplate, right? It's good boilerplate, but it is boilerplate. And we are going to make sure that we have our BF16 equals true, which is going to make sure that our compute D type is compatible when we upcast, which is necessary. It says CUDA, but would a Mac suffice to fine tune the model to the 4-bit? I would recommend a GPU, a NVIDIA GPU for sure. The kernels are written for it. I believe you can use 4-bit on other devices, but it's not necessarily going to be as efficient or as fast. 
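Pulling the loading-and-PEFT steps described above into one place, here is a condensed, hedged sketch. It assumes the 4-bit `bnb_config` shown earlier; the LoRA alpha, dropout, and target module names are assumptions following the "about twice the rank" heuristic and Mistral's usual projection names, and the demo notebook's exact arguments may differ.

```python
# Condensed sketch: load the model in 4-bit, then wrap it with LoRA via PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token        # common padding fix for causal LMs

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,   # weights land on the GPU already quantized to 4-bit
    device_map="auto",
    use_cache=False,
)

model = prepare_model_for_kbit_training(model)   # freezes base weights, fixes dtypes for k-bit training

lora_config = LoraConfig(
    r=64,                       # rank, as in the walkthrough
    lora_alpha=128,             # "about twice the rank" heuristic (assumed value)
    lora_dropout=0.05,          # assumed typical value
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed module names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()               # a few million trainable vs billions frozen
```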
The optimization of the kernel really added some speed to this process but i'll get back to you uh more about that uh after a little bit of digging to make sure that you can do this on mac even if it is going to be slightly less efficient uh we're going to use the trl trainer the sft trainer from trl in order to train our, our max sequence length of 2048 just for Mistral itself, and then we can train this using trainer.train. At the end of the day, we reload our model, just a quirk of path. We reload it. We make sure we load it in 4-bit, and then we have our torch D type for float 16. That's the compute D type again. And then we are going to, you know, look at the model. So we say in instruction, identify the odd one out among Twitter, Instagram, and Telegram. That's great. That is, that's an instruction that would result in this, in this, you know, in this kind of the odd one out is Telegram response. And you can see the ground truth is identify the odd one out is telegram response. And you can see the ground truth is identify the odd one out. And if we look at the base model, we can see that the base model's instruction is much less good. It does not even mention telegram. And so, not a very good instruction. But that is it for me and the code demo. So with that, I will pass you back to Greg who will wrap us up. So with that, I will pass you back to Greg. We'll wrap us up. Yeah, thanks, Chris. That was awesome as usual and love that deep dive explanation on exactly what's happening within the quantization method in the QLORA paper. So today we saw building off this PEFT-LORA approach, Today, we saw building off this PEFT-LORA approach, that PEFT-qLORA fine tuning is really about modifying behavior by updating fewer quantized parameters using factorized matrices. So this idea of using fewer parameters and of using the LoRa factorized matrix approach. This gets us from 3.8 billion down to 2.7 million parameters, less than 1%. And then we come in with quantization. This is technically blockwise k-bit quantization, effectively just allowing us to express more information with less. And the key to the QLoRa method is that from that 2.7 million parameter level we're coming in and we're starting to actually quantize that down to four bit before we we begin training during training we will de-quantize when we have to do computations and before re-quantizing to continue the training process. Next week, we are going to be covering how to not fine-tuning and loading, but now serving an inference with VLLM. So we hope you can join us for that one. But for today, we're going to go ahead and get started with the Q&A period. I'd love to invite Chris back up to the stage. And if you guys have questions, it looks like Manny is crushing it in the Slido right now. So shout out to Manny as usual. But if you guys have questions, crushing it in the Slido right now. So shout out to Manny as usual. But if you guys have questions, throw it in the Slido. We'll also try to get to your questions if you throw them in the YouTube live chat. But Chris, let's go ahead and jump right into it here. First question. Is the reason we don't get inference latency benefit with QLORA because model weights are re model weights are retained as 32 bit during inference. I mean, I, yeah, I mean the question, uh, to be more specific about, uh, the phrasing, I think we could say that the, the model weights are de-quantized to a higher precision during inference. So yes, that is why we don't see a benefit to inference. 
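For reference, here is a hedged sketch of the trainer setup walked through just before the Q&A began. TRL's SFTTrainer API has changed across versions, so argument names may need adjusting; the dataset id, batch sizes, learning rate, and step count are assumptions, and `generate_prompt` stands in for the formatting helper mentioned in the demo.

```python
# Hedged sketch of the SFT training loop described above (model/tokenizer come
# from the previous sketch; hyperparameters are illustrative assumptions).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")   # assumed dataset id

training_args = TrainingArguments(
    output_dir="mistral-7b-qlora-instruct",
    per_device_train_batch_size=4,       # assumed; size to your GPU
    gradient_accumulation_steps=4,       # assumed
    learning_rate=2e-4,                  # assumed typical QLoRA value
    max_steps=500,                       # assumed; a short demo-style run
    bf16=True,                           # matches the bf16 compute dtype
    optim="paged_adamw_32bit",           # the paged optimizer from the QLoRA paper
    logging_steps=25,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    formatting_func=generate_prompt,     # hypothetical helper building "response -> instruction" prompts
    max_seq_length=2048,
    tokenizer=tokenizer,
)
trainer.train()
```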
In fact, we see a penalty. It's not a big penalty, but there is a penalty. And so, but yes, that's exactly why. Oh, okay. Nice, nice. Yeah, excellent question. Astute one there. And then first one from Manny here. When we're talking about parameters, are we referring to additional features such as Xs in the equation, Y equals predict X1, X2, Xn? Are X1 to Xn considered parameters? What are we talking about when we say parameters? Yeah, parameters, features, it's all numbers, weights. I mean, we have so many different names for similar kinds of objects. I would think of parameters more specifically as the entities that fill up these weight matrices that we use to compute when we're actually doing that matrix multiplication. But yes, I mean, essentially a parameter, a parameter is any node in the, in the model architecture, right? So this is not something that you're going to want to use with like your XG boosts or your, you know, your kind of traditional ML methods, right? It's not like a random floor forest applicable, you know, technique. It's specific to that deep neural architecture. And it's also specific right now to that transformer architecture, though there's no reason it needs to be. It is most explored in that space. Hopefully that answers the question, Manny. Yeah, yeah. Well, we'll kind of flow through some of these other questions and pop back to Manny's questions as well. I think this one's super relevant to everybody. If I don't have a powerful laptop, where can I practice these techniques? Honey, P, it's Colab. Get yourself into Colab. Colab makes it so easy. And the whole benefit of this kind of thing is we can load these very large models with very little resource. And so oftentimes, you can load like a 3 billion or 6 billion parameter model, you can load that in a free instance of Colab right using the free free tier GPU, the T four. So it's I think that's a great way to start if you don't have a powerful laptop. As you get more embroiled in the space, you might look at other cloud hosting solutions, Lambda or AWS, whatever you want. But for the getting started beginner, I would say Colab is your best friend. If you want to, you can pay for compute so you can pay to get a little bit more uh beefy gpus but uh stick to the free tier and and stick with your kind of three to six billion parameter models and you're gonna have a great time yeah yeah yeah yeah stick to the three to six bill quantize quantize quantize quantize and uh and then colab like we we teach entire courses in collab and we do a ton of fine tuning throughout so you know just try to be as efficient as possible don't sit there and do tuning for you know days and days at a time if that's not really something that you're interested in you know use small, try to make the model as small as possible through picking the small size of Hugging Face and then quantization for sure. But yeah, there should be nothing stopping you if you're a beginner. You don't have to get AWS. You don't have to get all these things. Okay, Islam, we got a question that's getting upvoted here. Can I do this fine tuning with Lama CPP? And is this fine tuning possible to plug into the end-to-end fine tuning within a RAG framework? So E2E fine tuning within RAG framework, yes, 100%. The RCAI, we've done an event with them. Their DOM framework and GitHub, we'll get a link for you guys to drop into the chat. That is 100% a great tool that does leverage or can leverage LoRa as well as quantized methods. 
In terms of Lama CPP, I'd have to double check. I don't know off the top of my head, but I will double check and then we can include that information in a comment if I'm unable to find it before the end of our time together today. Okay. All right. Back to Mandy's next question. We say weights and biases when we talk about ML models or neural network models. So if weights are parameters, are we saying weights and biases that are parameters in the LLM world are weights and biases parameters? Let me think through this question. world are weights and biases parameters? Let me think through this question. We say weights and biases when we talk about LLM. So if weights are parameters, are we saying weights and biases parameters? Like our bias is also parameters? I guess is that the question? No. But yes. I mean, I mean, at the end of the day, the thing we care about is the weights. That's, that's, that's, that's all answer this question. We want to update the weights, aka the parameters. Okay. All right. Good stuff. Then I'm gonna go ahead. Last manny question here. Can you speak about examples of LoRa adapters? Like, what are they? And what are they created for? a tool perspective. So let's say we create a LoRa adapter that's very good at translating natural language to SQL. And then we create another LoRa adapter. And that LoRa adapter has been fine tuned to translate natural language to Python. Then we create another adapter and you you see you can kind of go on that the idea is that whenever we do inference we can choose whichever of those adapters or those laura layers to flow information through that's going to make our output consistent with what we fine-tuned it to do so you can you can think of them as little hats you can put on your model that's going to change its behavior, but it doesn't touch the, it doesn't modify or it doesn't have to modify the base model at all. Just kind of this hat that sits on top of it, but gets it to do a different job. And the idea is that we can choose those hats as we want, even at time of inference, we can choose which hat we want it to wear. Yeah. Yeah. And I mean, you know, this is like the thing for businesses too. It's like, if you think about these adapters, man, it's like they're plug and play. And so if you want the LLM to do something super specific, that prompt engineering has only gotten you so far and you just can't get what you need exactly to get in and out in specific ways with your customer or your user. If you want to really constrain what your user can put in, you want to really constrain what comes out, this fine-tuning piece, this lore adapter piece is going to be like your friend. You know, we had a great meme that we posted on LinkedIn recently where it's sort of like if you're doing fine tuning, you're kind of doing LoRa. So it's sort of like this is a big question. You know, examples of LoRa adapters would be like anything that you fine tuned, you know, you might say and. OK, we've got a couple of minutes left. I'd like to shout out out to you know thanks for the great note just want to appreciate your efforts uh appreciate a lot it looks like we've got uh george i think he's struggling with a specific error maybe we can comment on that after the the event he's he's put his error into slido as well um i guess uh last question this is a big question. So you can take maybe two minutes, Chris, what are the trade-offs of using dimensional reduction techniques like LoRa, QLoRa, PEFT on LLMs in terms of training, inference, fine tuning? 
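To make the "adapter hats" answer above concrete, here is a sketch of swapping LoRA adapters on one frozen base model with PEFT; the adapter repository names are hypothetical placeholders.

```python
# One frozen base model, several LoRA adapters, switched at inference time.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", device_map="auto"
)

# Attach a first adapter (e.g. one fine-tuned for text-to-SQL)...
model = PeftModel.from_pretrained(base, "your-org/mistral-lora-sql", adapter_name="sql")
# ...and load a second one, fine-tuned for text-to-Python, alongside it.
model.load_adapter("your-org/mistral-lora-python", adapter_name="python")

model.set_adapter("sql")      # requests now flow through the SQL "hat"
model.set_adapter("python")   # swap hats without touching the base weights
```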
Like when you think of trade-offs, maybe best practices here, what do you think of? I mean, the big one is quality or like how good the output is uh there is a trade-off there it's really small and beyond being really small it's really small like so okay this is this is the way i think about trade-offs when it comes to laura and and the crew uh i can i can find you the laura model right to be let's say like 98% as effective as full fine tuning, right? But I can do that in a 10th of the time with a thousandth of the resources, right? So divide by a thousand, the number of resources. I mean, that is a trade-off. There is a trade, you're losing 2%. But like, it doesn't feel like a real trade off. And especially in terms of business value. It's not like a, it's not a real trade off these days, like, especially if you use a high enough R or rank in your your Laura, so you're using that kind of 128 are, you're still getting a massive reduction in compute but you're retaining so much of the performance that it it truly doesn't feel like a trade-off it there is a trade-off to be clear there is always technically going to be a trade-off but it lets you do things you wouldn't be able to do so it doesn't feel like a trade-off i I mean, for small companies, you can fine tune a model that does a whole new novel thing that fuels your business, that is your business, right? That you just couldn't do if you didn't use these methods. In that case, there is no trade-off, right? It's enabling you to do something that was previously impossible to you. That's only advantage. When it comes to inference specifically, possible to you that's only advantage uh when it comes to inference specifically both uh the the Q Laura or any quantized uh method using bits and bytes and Laura if you're talking about non-merged Laura adapters do impart a small inference latency penalty it is. At scale, it can maybe be felt, right? If you're really getting to those hundreds of thousands of requests per second compared to a very efficient model, you might want to re-quantize that to another format and serve that model directly instead of having it part of your LoRa stack. But again, these are problems that come with scale and that scale kind of also helps you fund the solution. But outside of that, you're not going to feel these issues until you're into the six figures or more requests per second for your kind of LLM stack. So I would say there are trade-offs, but when you're getting started, they really don't appear as trade-offs. All right. Yeah. Okay. So use PEFTQ, Laura, unless you got a million requests per second. Sounds like a plan, dude. All right. Cool. Let's go ahead and wrap it up. Thanks, Chris. And can't wait till next time. Thanks, everybody, for joining us today. Again, next week, we'll be back talking inference and serving and how to do it efficiently with VLLM, one of the hottest open source tools out there for doing that. We'll tell you a little bit about the tool and its background. If you like this session, you might also really like cohort four of LLM Ops, LLMs, and Production launching February 13th. In that course, which we're going to be soon announcing an expanded curriculum for, you'll learn to prototype and scale production LLM systems, including using RAG techniques, including fine tuning, and so much more. Check it out in the link. And then lastly, please share any feedback you have on today. You can drop it in the chat or you can drop it in the feedback form. That will drop to you now. 
And that's it for today. Until next time, keep building, shipping, and sharing, and you know we'll be doing the same thing. See y'all next week.", "title": "Fine-tuning with QLoRA (Quantized Low-Rank Adaptation)", "duration": 3710, "uploader": "AI Makerspace", "upload_date": "20240111", "description": "\u200bGPT-4 Summary: Discover how to supercharge your LLM application development by mastering quantization, a game-changing technique that dramatically reduces the size and computational demands of large language models (LLMs). In our upcoming live event, we'll dive deep into the essentials of quantization, demonstrating how it makes LLMs more accessible and cost-effective for tasks like loading, fine-tuning, and inference on limited hardware. Learn the ins and outs of using the bitsandbytes library to load quantized LLM parameters for our Mistral-7B demos, and explore advanced fine-tuning techniques with QLoRA, building on the principles of Parameter Efficient Fine-Tuning and Low-Rank Adaptation (PEFT-LoRA). Whether you're working on development or in production, this event is your key to speeding up the LLM application cycle. Code will be provided, ensuring you have everything you need to implement these strategies effectively. Don't miss out on this opportunity to elevate your LLM projects!\n\nEvent page: https://lu.ma/quantization\n\nHave a question for a speaker? Drop them here: \nhttps://app.sli.do/event/7CrWMfvZg2NXWh6aYsKkfr\n\nSpeakers: \nDr. Greg, Co-Founder & CEO\nhttps://www.linkedin.com/in/greglough...\n\nThe Wiz, Co-Founder & CTO\nhttps://www.linkedin.com/in/csalexiuk/\n\nJoin our community to start building, shipping, and sharing with us today!\nhttps://discord.gg/RzhvYvAwzA\n\nApply for our next AI Engineering Bootcamp on Maven today! \nhttps://maven.com/aimakerspace/ai-eng-bootcamp\n\nHow'd we do? Share your feedback and suggestions for future events.\nhttps://forms.gle/u63yUJRD9AijuTE98", "datetime": "2024-06-09T20:46:24.159829"}
train/transcriptions-f952d775-1e9c-4676-9254-510305708f0e.json
ADDED
@@ -0,0 +1 @@
{"url": "https://www.youtube.com/live/XOb-djcw6hs", "transcription": " Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? It sure is, Greg. Yes. And is quantization like really as good and as dope as everybody's talking about? Yes. Emphatically, yes. Emphatically, yes. Man, I cannot wait to see exactly what's going on inside. You're going to show us how to do this today, right? Sure. All right. Let's go ahead and get right into it, man. We'll see you back in just a little bit. Today, we're going to talk quantization. I'm Greg. That's Chris. We're from AI Makerspace. This is a bit of an add on to last week's event, which talked about parameter efficient fine tuning and low rank adaptation. Today, we're gonna take it to the next level and talk quantization. We'll demystify the idea of quantization, and we will also talk about how to leverage the latest in low ink adaptation which is a quantized version of it called QLORA as always we'll be collecting questions with slido so go ahead and provide your questions for us throughout the day at that link and then we'll go ahead and answer as many as we can when we're through with the demo at the end. Of course, we'll have Chris back to lead and wizard his way through the demo on quantization soon, but for now, let's cover what we need to know so that's going to make sense to us. We're going to talk quantization of LLMs today, and we're going to talk fine-tuning with LoRa. This is the main goal. We want to understand and we want to align our aim to really grokking QLoRa and then seeing how we can implement that. We got a little bit of insight into quantization last time when we were loading the model but now we want to take a look at how it can be used to fine tune and some of the background and intuition associated with why this works and what the industry has sort of learned about the precision of numbers within our llms so we're going to talk-tuning quantization QLORA, and then we'll do it. And to sort of contextualize this, similar to last time, we wanna understand that often fine-tuning is coming into play after we do prompt engineering, often after we set up a retrieval augmented generation system. And we wanna now take a look at how we can optimize our large language model, or in other words, how we want the model to act, how we want the input and output schema of the model to be a little bit more constrained, a little bit more dialed in, a little bit less large, a little bit more small. And this is sort of the trend we're noticing as 2024 is upon us now. We are seeing a bigger and bigger interest in smaller, more performant language models and fine tuning is really a key aspect that's going to help us to get there so let's just remind ourselves what we talk about when we talk about fine tuning with peft laura PEFT LoRa. And why we need to do this. You know, when we talk LLMs, they're super big. They have billions and tens of billions of parameters. It's likely we'll see models with hundreds of billions of parameters before too long. Not all models are always getting bigger, but some of them are. And the reason is, is because if we keep adding more text and more parameters, we are pretty confident that our next word prediction will continue to improve. prediction will continue to improve. 
But as we do this, as we build larger and larger models, as we have to deal with more and more compute in order to be able to handle them, whether that's loading them, training them, fine tuning them, or performing inference on them and serving them. We're kind of abstracting away from the regular developer, the regular individual out there that doesn't have access to a giant cluster of GPUs to be able to even play with these things. And this is the core problem, is that when we go and we want to do full fine-tuning on many, many billions of parameters, this becomes a huge pain for anybody trying to use consumer hardware, any small business trying to just use the laptops that they have, maybe a few resources on the cloud. And this is as true for fine tuning as it is for loading and storing, certainly for deploying these models. It just costs too much. And the solution for kind of dealing with the fine tuning, the storing and the deploying is kind of the same. But today we're focusing on fine tuning. Today we're focusing on fine tuning using fewer parameters. It's all about using fewer parameters. We don't need all of them as we started to get some intuition into last time. And in fact, the ones that we have, what we're going to do today is we're going to take those parameters and we're going to make them smaller in a sense. We're going to make them smaller in a computational sense. This is the essence of quantization. So while it may not be necessarily fewer parameters when we talk about quantization, although it often is when we talk about fine-tuning, we're just trying to move these big, big, big models towards smaller packages through fewer parameters and through more efficient representation of those parameters. And we saw last time, we saw that LoRa is the number one PEF method you should know. It's called low-rank adaptation. And the big idea of LoRa, as we discussed, was to fine-tune using factorized matrices. And again, we didn't focus on fine-tuning absolutely everything. We did fewer parameters. That was great because it was more efficient. And we found out that we could actually leverage LoRa adapters for many tasks. So you could have one big, big model and a ton of different lower adapters and deploy that to production. Deploy each of those adapters to production because at inference is when the adapter would actually come into play. So very, very flexible, very good technique for. Larger companies and industry, especially that want to just have many adapters and larger companies and industry, especially that want to just have many adapters in one very powerful model, we'll probably start to see this emerge as an approach to AI development in the enterprise. And, you know, it's really comparable to fine tuning, full fine tuning. full fine-tuning. So, you know, we saw, in essence, that fine-tuning is all about modifying behavior of LLMs to update parameters. Parameter-efficient fine-tuning is all about fine-tuning with fewer parameters. Low-rank adaptation was all about fine-tuning using factorized matrices. And so parameter-efficient fine-tuning through low-rank adaptation is all about modifying behavior by updating fewer parameters using factorized matrices. So this all sort of flows together. This leads us directly to our new friend, quantization. And this meme is so good, I had to put it twice, because it's such an oft misunderstood idea. Certainly has taken a long time for me personally to really try to grok this thing. 
So let's see if we can break it down in a way that makes sense to all of you. First off, the weights of our LLM, when we talk about weights, it's the same thing as when we talk about parameters. Okay. So parameters, I might say weights, we're still talking about parameters. Those parameters are simply numbers. They're just numbers. And specifically, they're floating point numbers, They're floating point numbers, also known as floats. And it's important to understand a little bit of the detail here, because this is the essence of what we're doing in quantization. When we talk about floats, you may harken back to your days in school, maybe chemistry, back to your days in school, maybe chemistry, where you learned about significant figures, sig figs, everybody's favorite, right? And then if you're like me, you become an engineer and you don't care anymore, ever again. But I was a mechanical engineer. If you're a computer scientist, computer engineer, maybe you continue to go deeper. And these days in AI, if you're a developer, you need to continue to go a little deeper. Because this idea of a float is cool, this integer with a fixed precision, we can talk about representing, for instance, 12.345 as 1, 2, 3, 4, 5 times 10 to the minus 3. And we can then do this by using a specific number of bits in our computer. When we talk about this precision, this fixed precision, there's a number of different types of precision. What we're going to generally be using is what's called full precision when we're doing computations that are kind of default computations. Full precision means that I have 32 bits to represent my floating point number. And they're broken up into a couple different pieces here, but the big idea is that there's 32 bits. And the question is, is that the right amount when we want to go and deal with 70 billion parameter models and things like that? And it turns out that in machine learning, we found sort of over time through experiments that if we didn't use 32-bit precision and instead we used 16-bit precision, Instead, we used 16-bit precision, essentially half precision, to again, simply represent those decimal numbers that are inside of each of the neural network, that represent each of the neural network weights, sort of each of the neural network perceptrons is a way you could think about this. Then what we're finding is that we can get almost identical inference outcomes from our LLM. Because remember, we just want the words that come out at the end. We just want the ideas that come out of that. We just want the outputs. We don't necessarily care about the precision of the stuff within the black box. We put in, we get out. And a lot of people were seeing this. A lot of researchers were seeing this with the large language models, that if we just leveraged half precision we can get very very good outcomes and what this does is this effectively halves the entire model size so what are we saying we're saying that we can sort of get exactly the same thing coming out coming out, even if we represent each of the model weights using half as much information we can think about. Because really, I mean, how many sig figs do we need? And another way we can talk about moving from a 32-bit down to a 16-bit representation is we can say we are quantizing. We quantize the 32-bit weights down to 16-bit. weights down to 16 bit. Hence quantization. Now, when it comes to quantization, there are many different approaches to quantize model weights. So, this is very important. 
We're not going to cover every single approach, because that's not really necessary for what we want to discuss today. There are many different ways to quantize model weights, and we hope to bring you more content on approaches that differ in their actual implementation and their nuances in the future, but for today we're going to use this QLoRA idea as a focusing lens. Now, the QLoRA story begins with a paper called 8-Bit Optimizers via Blockwise Quantization. This was a paper that came out of the University of Washington. Tim Dettmers was the lead author, and he's been quite a superstar in the field; he's kind of the quantization guy. In this paper, they showed that you can use 8-bit representations and maintain performance at the level of full precision, or 32-bit. So here we see, in this kind of early paper, one of these pieces of work where they're saying: hey, look, experimentally we're seeing that if we reduce the precision, we can still get great results. And this is not reducing it to half precision; it's reducing it to quarter precision, 32 down to 8. This bits-and-bytes paper turned into what became the bitsandbytes library, which has since evolved, is something we'll see Chris use today, and is something that gets used all the time now. Now, for bits and bytes, recall that one byte is equal to eight bits. We're going to continue the discussion in bits today, but you'll see many papers and discussions that talk in bytes as well. So it's pretty simple to understand why the library was named bits and bytes. Again, this is one approach, and there are some trade-offs, as there are with any approach. For instance, when we use the bitsandbytes approach to quantization, we're not really getting any additional benefit to our inference latency. We're not speeding up inference a whole lot by using this particular approach to quantization. However, what we are doing is leveraging a tool that gives us very flexible use of those LoRA adapters, right? So for enterprise, if we're thinking about how to have one big model and just a bunch of adapters, this is going to be our friend, and this is why we choose to focus on it today. And this bitsandbytes library forms the basis for what comes next: it forms the basis for this QLoRA idea, this efficient fine-tuning using quantization. The big takeaway from the QLoRA paper is that fine-tuning with quantization works remarkably well, even though it's eight times less precise. What we actually have going on in QLoRA is not an 8-bit representation but a 4-bit representation, and what we can do with it is completely insane: we can fit all of that fine-tuning on a single 48-gig GPU, which is just kind of incredible, kind of mind-blowing. So this QLoRA paper is essentially coming and saying: hey, listen, we've got this idea that we can do fine-tuning using a 4-bit approach versus even a half-precision approach, and we get amazing results. And that is the essence of what's going on here with QLoRA. 
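Some rough back-of-the-envelope arithmetic (my own illustration; it ignores activations, optimizer state, and the small overhead of quantization constants) shows why each halving of precision matters so much for fitting a model onto a single GPU.

```python
params = 7_000_000_000                      # e.g. a Mistral-7B-class model

for label, bits in [("float32 (full precision)", 32),
                    ("bf16/fp16 (half precision)", 16),
                    ("int8 (bitsandbytes 8-bit)", 8),
                    ("4-bit (QLoRA-style storage)", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{label:28s} ~{gb:4.1f} GB just to hold the weights")
```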
And so what we can think about is this: if we go back to the idea of PEFT-LoRA fine-tuning, where we're modifying behavior by updating fewer parameters using factorized matrices, and we add this idea of quantization, where quantization is simply representing high-precision numbers with lower precision, then we get to the place where we talk about PEFT-QLoRA fine-tuning, where we're modifying behavior by updating fewer quantized parameters using factorized matrices. The process as outlined in the QLoRA paper, and the process you're going to see today, is something like this. We download the model weights; anytime you download model weights from Hugging Face, they're going to be in full precision, 32-bit. Then we load our parameter-efficient fine-tuning model into GPU memory; anytime we load into GPU memory for inference or training, we're going to be loading using that parameter-efficient fine-tuning method. Then we'll initialize our low-rank adaptation, our LoRA configuration. And finally, and this is the key to the whole thing: during training, we take that full-precision 32-bit model and we actually load it as a 4-bit model, quantizing 32-bit down to 4-bit for training. Now, during training, we're going to flow through the network, and each time we have to do a computation, each time we have to calculate something during the training process, we de-quantize that 4-bit representation back up to a 16-bit, half-precision representation, do the calculation, and then re-quantize back down. At each step of our training or fine-tuning, we quantize, de-quantize, and move on. So we're never holding the half-precision model fully in GPU memory; rather, we're simply using half precision to do the calculations. This is the magic of what's really going on behind the scenes, and it turns out this works incredibly well. Again, the intuition behind the 16-bit piece is that we saw that for inference, you can go from 32-bit down to 16-bit and get very, very good results; we saw this experimentally over a lot of time, not just in papers from the University of Washington, but also in papers from many other researchers. This QLoRA approach, fundamentally, is to load those full-precision weights into GPU memory as quantized 4-bit weights, and then only de-quantize up to 16-bit during calculation, and back down as it moves through. All right, so this is the core approach that we're going to see today. You're going to see things like this: this is the bits-and-bytes configuration. You'll notice that when we want to load in, we want to load in 4-bit. You're also going to see a data type called NF4. Chris is going to talk a little bit more about it; it's very important, very essential to the QLoRA approach. And that's it for the big ideas we need to see how this build can be taken to the next level. What we want to do is take the same build that we've already looked at, the old UNO reverse card build: given the response, predict the instruction. We want to use the same model that we saw last week, because it's still one of the best out there: Mistral 7B Instruct v0.2. And we're going to use the same data for fine-tuning, just to keep everything simple: that Alpaca GPT-4 dataset. So again: given the output response, predict the input instruction. 
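Before the hand-off, here is a conceptual sketch of that quantize/de-quantize loop. This is a simplified illustration of the idea, not the actual bitsandbytes kernels: it uses a single scaling constant and a made-up level table just to show where the de-quantize-to-bf16 step sits inside a forward pass, with only the LoRA factors receiving gradients.

```python
import torch

def quantized_linear_forward(x_bf16, codes, scale, levels, lora_A, lora_B):
    # codes: the frozen base weight stored as small integer indices (the "4-bit" part)
    # 1) de-quantize just this layer's base weight up to bfloat16 for the math
    W_bf16 = (levels[codes.long()] * scale).to(torch.bfloat16)
    # 2) compute in 16-bit: frozen base path plus the trainable LoRA path
    h = x_bf16 @ W_bf16.T + (x_bf16 @ lora_A.T) @ lora_B.T
    # 3) W_bf16 goes out of scope here, so only the compact codes stay resident;
    #    gradients flow only into lora_A and lora_B
    return h

# toy shapes: d_out = d_in = 8, rank r = 2
levels = torch.linspace(-1, 1, 16)                  # stand-in for the NF4 level table
codes  = torch.randint(0, 16, (8, 8))               # "4-bit" codes for the frozen weight
scale  = torch.tensor(0.02)
lora_A = torch.randn(2, 8, dtype=torch.bfloat16, requires_grad=True) * 0.01
lora_B = torch.zeros(8, 2, dtype=torch.bfloat16, requires_grad=True)
x      = torch.randn(1, 8, dtype=torch.bfloat16)
print(quantized_linear_forward(x, codes, scale, levels, lora_A, lora_B).shape)  # (1, 8)
```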
And with that, we're ready to kick it back over to Chris, the wizard, to show us how to do fine-tuning with PEFT-QLoRA and fill in some additional details. Wiz, over to you, man. Oh yeah, thanks Greg, really appreciate it. And guys, I'm excited, because quantization is definitely one of my favorite topics; it's one of the best things we can do right now. As you can see, we only used around 20 gigabytes of GPU RAM to train this 7-billion-parameter model, which is quite impressive through my lens, and that includes the fine-tuning. In any case, we'll get right into it. First of all, we're going to be using Mistral 7B Instruct v0.2. This is just Mistral's most recent instruct-tuned model, and I love it. And we're going to now move on from PEFT, which we discussed last week, into the Q in QLoRA. So we discussed the idea of how we can reduce the number of parameters that we train; but now, how do we reduce the size of the parameters that we train? First of all, what is quantization? Greg already talked us through it. I'm going to give a brief overview here of what's happening under the hood, and then we'll get into how to implement it in code. Spoiler alert: it's super easy, thanks to bitsandbytes. But let's look at what quantization is from this perspective. Quantization is a process of discretizing an input, going from a representation that holds more information to a representation that holds less information. That's crazy, right? The idea is that we want to express more information with less information. So how do we actually do that? Well, in Tim Dettmers' QLoRA paper, they rely on a process called blockwise k-bit quantization, which sounds very scary, but it's not so bad. It relies on two very important things. One, it relies on the fact that in neural networks, the model weights are mostly normally distributed. If you're coming from a stats background, as soon as you hear the words "normal distribution" your eyes should light up: we're going to be able to make use of a lot of very clever tricks to help us do whatever we're trying to do. Two, it relies on this idea of the NF4 format, which is a number format, or data type, created by Tim Dettmers and team that is information-theoretically optimal. Now, not literally; it was proven that this is not literally true. But empirically, for all intents and purposes, NF4 is very, very efficient, which is excellent. So how does this work behind the scenes? Okay, we get it: model weights are normally distributed. That's great. So what we're going to do is essentially put a pin in the number line that is near the mean of our desired numbers, which are going to be in a distribution, and that distribution is going to be normal. Then we're going to use that mean as a zero point, and we're going to use this NF4 data type, which is a zero-centered number format, to represent the numbers that appear around that specific point on the number line. There's a step that needs to take place here: we're going to normalize all of our numbers to be within a specific range of minus one to one. And then we have this idea of a saved place on our number line that we understand a range around. And that's really about it. 
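As a toy version of that "normalize the block, drop a pin, snap to a small set of levels" idea (my own sketch: it uses 16 uniform levels instead of the real NF4 table, and the production bitsandbytes kernels are far more sophisticated):

```python
import torch

def quantize_block(block, levels):
    absmax = block.abs().max()                 # the "pin": one constant per block
    normalized = block / absmax                # now every weight lives in [-1, 1]
    codes = (normalized.unsqueeze(-1) - levels).abs().argmin(dim=-1)
    return codes.to(torch.uint8), absmax       # 4-bit codes (0..15) plus the constant

def dequantize_block(codes, absmax, levels):
    return levels[codes.long()] * absmax       # approximate reconstruction

levels = torch.linspace(-1, 1, 16)             # 16 levels -> 4 bits per weight
block = torch.randn(64) * 0.02                 # one block of 64 roughly-normal weights
codes, c = quantize_block(block, levels)
approx = dequantize_block(codes, c, levels)
print(f"max reconstruction error: {(block - approx).abs().max().item():.5f}")
```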
Now, it's a bit simplified and it's definitely, you know, you can look at the paper for the math. It's great. But the idea is that we have, we kind of drop a pin in the number line and we have this NF4 number format, which represents a range around that point to the number line. And that is what's going to build up the buckets or bins that we're going to use to represent our numbers. And the reason this works so well is again, because of the fact that model weights are normally distributed and because this is an informationally, theoretically optimal data type for that minus one to one range. So this is specific Yennefors for that minus one to one range for normally distributed, to one range. So this is specific, the n of four is for that minus one to one range for normally distributed, well, distribution. So that means the only reason this works is because of this first fact, right? Now, beyond just that, QLORA does an extra step. So you might have thought to yourself when I said drop a pin in the number line, right? Well, okay, if we drop a pin in the number line, that's all well and good, but doesn't that mean that we have kind of like a high precision number, right? It doesn't have to be as high precision perhaps, but it's definitely still high precision. And that's true, right? That pin we drop is high precision. Well, it can be used to represent many numbers. In this case, you know, 64 numbers from the QLORA paper. So each pin is associated with 64 numbers. Tim Demers and crew said that's not enough. You know, that's going to give us 0.5 bits per parameter of overhead, right? So we need to go bigger. So what they did is they actually took all of those quantization constants. That's the technical term for that pin that we're dropping, right? We take those quantization constants, and then we also quantize those. So we represent our quantization constants in an 8-bit format, and we do 256 of those for every 32-bit precision number. So we have one 32-bit precision quantization constant that sits on top of 256 8-bit quantization constants, which sits on top of each of those sits on top of 256 8-bit quantization constants, which sits on top of, each of those sits on top of 64 4-bit. So you can see the savings in terms of memory here is insane, right? We're able to represent so much of our data in that 4-bit representation. And we're also able to do it in a way that retains a ton of information. And that is key. I saw some questions in the YouTube chat kind of concerning, you know, what's the trade-offs here? What's the performance gains? And there definitely is some when it comes to latency. We'll discuss those as we move through the rest of the notebook. But in terms of the actual effectiveness of the model, the performance hit can be very small. It is not zero. There is a performance hit, but it's incredibly small, which makes this a very effective technique, especially when applied in the way we're going to see it applied today. So that's basically what we're talking about when we talk about this idea of QLora, right? 
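The memory math behind that second, "double" quantization step is easy to sanity-check with the block sizes quoted from the QLoRA paper: one 32-bit constant per block of 64 weights, with those constants themselves stored in 8-bit in groups of 256.

```python
# overhead of the quantization constants, in extra bits per model weight
single_quant = 32 / 64                      # one fp32 constant per 64 weights -> 0.5 bits
double_quant = 8 / 64 + 32 / (64 * 256)     # 8-bit constants, plus one fp32 constant
                                            # per 256 of those -> ~0.127 bits
print(single_quant, round(double_quant, 3))
```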
We're talking about dropping a pin on the number line and then saving kind of numbers or representing numbers around that and then doing that one more step abstracted which is harder to visualize but there it is okay so how do we do it in code now right uh well first of all we gotta load our our kind of familiar usual suspects here so we're bits and bytes data sets accelerate uh the laura lib library transformers and peft uh these are all kind of staple libraries we're bits and bytes data sets accelerate the Laura lib library Transformers and peft these are all kind of staple libraries we're going to be using when we're using these uh kind of Q Laura tools and then we're going to grab our model and the model we're going to grab is the Mistral AI Mistral 7B instruct v 0.2 it's the most recent uh instruct model for Mistral it's a great one and then this is kind of uh you know where the magic happens this is the bits and bytes config uh this is from the bits and bytes library we're gonna see that we load in four bit so this means when we actually move our model from those saved weights uh that exist on our on our drive, when we load those into our GPU, we're going to load them in that four-bit quantized state, right? So that's that collection of numbers and then their quantization constants and then their quantization constants because we're using this use double quant, right? If we omitted that use double quant, we would only do one step, and then we would be saving less effective memory. We're also going to be using the quant type of that NF4 I talked about. That's the Tim Detmers and crew created number type, which is information theoretically optimal. Again, not literally true, but it's close enough, so we'll keep saying it. And then we're going to have this idea of a compute D type, which is going to be torch B float 16. Now this is very important, right? So when we store numbers in 4-bit, that's awesome. But when we try to compute with them, it's really bad. It's actually quite bad, right? If you think about when you multiply two numbers together, especially if they're kind of small, right? If you think about when you multiply two numbers together, especially if they're kind of small, right? We usually wind up with a number that is relatively needs more precision to fully accurately understand it, right? When we divide 100 by 1000, we wind up with a very, you know, a small number. And the idea is that we'll need more precision to represent that very small number. So what we do with the QLORA approach is we actually de-quantize whatever we need to compute with our weights. Now, this is done at a per-tensor level. So we never have the full model de quantized in memory, just one tensor at a time, right? So this saves us a ton of a ton of space. And it also lets us have the ability of computing as if we have this model in that higher precision or B float 16 format, right? Which is huge. So we're saving tons of space and then we're de-quantizing. So we also retain some of that compute precision. And that is what lets this method really shine, right? The fact that we de-quantize for computation and then we store in 4-bit. I think without that, this would be a less powerful method. But with that, it's amazing. You can choose up to full precision here. Obviously, that is going to come with some small memory overhead. You do have to upcast a tensor to the full precision, but it's negligible compared to the size of the model. 
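Put together, the configuration being walked through here, plus the model load described next, looks something like this (a sketch using the standard transformers/bitsandbytes API; the exact values and padding choices in the notebook may differ slightly):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store the frozen base weights in 4-bit
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_quant_type="nf4",              # the NF4 data type discussed above
    bnb_4bit_compute_dtype=torch.bfloat16,  # de-quantize to bf16 for each computation
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # pack as much as possible onto the GPU
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token   # a common padding setup for causal LM training
tokenizer.padding_side = "right"

print(model)   # shows the 4-bit linear layers, with the norm layers left in higher precision
```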
And it does also, and this is critical, it does come with some inference and training latency overhead, right? The fact that we have to de-quantize and re-quantize, de-quantize and re-quantize, this means that we're performing an additional operation per computation. And so that is going to impact inference. Now, Tim and team have written some great kernels for this. So it's not very slow, but it is going to be slower than if we weren't doing that extra operation. And so this is one of the key trade-offs, right? We had questions about trade-offs. One of the key trade tradeoffs with Qlora and with the bits and bytes approach is that it is extraordinarily flexible. It is very powerful and it works very well with a PEFT adapter methods. So like LoRa and others, but it does cost us a little bit of inference latency in training time. So that's important to keep in mind. Once we have our bits and bytes config loaded, all we have to do now is just load our model like we normally would. So auto model for causal LM from pre-trained. We're gonna pass in our mistral AI model. We're gonna pass in our quantization config. We're not gonna need the cache and we're gonna map this to auto, which is gonna shove as much as it can into our GPU. In this case, again, because the actual model loaded only takes up about 15 gigabytes of GPU memory, it's all squeezed into the GPU there. So that's great. We do some pre-processing on our tokenizer to make sure that it's set up in the right format for training. And then we can look at our model architecture. You'll notice that we have this four-bit layer, right? This four-bit layer is where that bits and bytes comes in. You'll see that we have the four-bit layer on our QKVO proj as well as our MLP. So it's all four bit, all the way down. This is the idea, right? We don't want to just quantize some of the model. We're gonna quantize as much of it as we can. However, you will notice that we omit some of the layers, specifically we omit our layer norms. And the reason we omit our layer norms is we know that our layer norms. And the reason we omit our layer norms is we know that our layer norms are going to tend to a very, very small number, you know, near zero. And we're going to run into some training instability issues if we use lower precision to represent these layers. So we're actually going to keep those in full precision. Now they're very small compared to their weight matrix counterparts, but we do want to make sure that we're keeping those layer norms in a higher precision. This is to avoid training instability issues, right? If we have these numbers kind of diverge and cause a ruckus, right? We're not going to be able to train very well. And so that's why we don't see those four-bit layers here. Now that we have our model loaded, we can see that it's in four-bit. We're very happy about that. It's time to peftify it. We talked about peft last week, so we're not going to spend too much time on it today, but the idea is fairly straightforward. We are going to use our LoRa config to set up our rank. Our rank is going to be 64 in this case. We're going to use our LoRa config to set up our rank. Our rank is going to be 64 in this case. We're going to set our alpha, which should be by conventional wisdom, about twice your rank. Though you're, you know, again, it's always worth doing hyperparameter searches here to make sure you have the most optimal hyperparameters. Your LoRa dropout, pretty consistent value. Your bias is none. 
Task type is causal, because that's what we're doing. You'll also notice that we have our QVK proj modules. We, again, with QLoRa, we want to target as many modules as we can, right? The QLoRa paper's wisdom is that we should actually target all possible layers of LoRa. In this case, we're just going to leave it up to PEFT to simplify things a bit for us. For our base model, all we have to do is prepare our model for k-bit training. This makes sure that we can train and that all of the trainable layers are set appropriately and that any frozen layers are also set appropriately. And then we're going to get our PEFT model and our PEFT model is going to uh give us those laura layers now you'll notice that we have only 2.7 million trainable parameters out of a possible many billion trainable parameters right and the key thing about q the q and q laura right is well is great, when we make each of these parameters one eighth the size, right, we're effectively reducing this by another factor of about eight. It's not strictly eight because of the fact that it doesn't interact with all layers, but the idea is it's about eight another factor of eight reduction in the uh in the total size of parameters that we have to train which is insane right it's uh we we went from kind of we're already at a fraction of a percentage and then we even further reduce uh the amount of actual uh work that we have to do, which is great. And then we can see here that our LoRa layers are also 4-bit, right? We have our LoRa layers are 4-bit as well as our actual, you know, regular layers that were converted to 4-bit. After that, we're going to load some data. We're just going to grab the Apaka GPT-4 data. We're going to do this Uno reverse card train, just a fun one. It's kind of like the classic now. I think this is what you're going to see. Whenever you do an instruction tune, it's just fun and it really proves the point that the process works. So we're going to ask the model to take a input and then generate an instruction. So we're going to create a model that's good at generating instructions. We're going to use this generate prompt helper function in order to create these prompts that our model will be trained on. And then we're going to set up our trainer. Our trainer, this is all boilerplate. The other big insight from the QLora paper is this paged Atom W32 bit optimizer. I'm not going to go too into it here, but the idea is that this idea of using paged memory is really, really effective, and it helps us train very stably and very efficiently with very little cost to us other than we have to flag it. The rest of this is all boilerplate, right? It's good boilerplate, but it is boilerplate. And we are going to make sure that we have our BF16 equals true, which is going to make sure that our compute D type is compatible when we upcast, which is necessary. It says CUDA, but would a Mac suffice to fine tune the model to the 4-bit? I would recommend a GPU, a NVIDIA GPU for sure. The kernels are written for it. I believe you can use 4-bit on other devices, but it's not necessarily going to be as efficient or as fast. 
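Collecting the PEFT setup from this part of the walkthrough into one sketch (the rank of 64 and the causal-LM task type are stated above; the alpha, dropout, and target-module list here are my assumptions, based on the "roughly twice the rank" rule of thumb and the usual Mistral projection names):

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

lora_config = LoraConfig(
    r=64,                          # rank used in the walkthrough
    lora_alpha=128,                # "roughly twice the rank" rule of thumb (assumption)
    lora_dropout=0.05,             # a commonly used value (assumption)
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # attention projections
)

model = prepare_model_for_kbit_training(model)   # freeze base weights, fix dtypes/norms
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()               # e.g. ~2.7M trainable out of billions
```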
The optimization of the kernel really added some speed to this process but i'll get back to you uh more about that uh after a little bit of digging to make sure that you can do this on mac even if it is going to be slightly less efficient uh we're going to use the trl trainer the sft trainer from trl in order to train our, our max sequence length of 2048 just for Mistral itself, and then we can train this using trainer.train. At the end of the day, we reload our model, just a quirk of path. We reload it. We make sure we load it in 4-bit, and then we have our torch D type for float 16. That's the compute D type again. And then we are going to, you know, look at the model. So we say in instruction, identify the odd one out among Twitter, Instagram, and Telegram. That's great. That is, that's an instruction that would result in this, in this, you know, in this kind of the odd one out is Telegram response. And you can see the ground truth is identify the odd one out is telegram response. And you can see the ground truth is identify the odd one out. And if we look at the base model, we can see that the base model's instruction is much less good. It does not even mention telegram. And so, not a very good instruction. But that is it for me and the code demo. So with that, I will pass you back to Greg who will wrap us up. So with that, I will pass you back to Greg. We'll wrap us up. Yeah, thanks, Chris. That was awesome as usual and love that deep dive explanation on exactly what's happening within the quantization method in the QLORA paper. So today we saw building off this PEFT-LORA approach, Today, we saw building off this PEFT-LORA approach, that PEFT-qLORA fine tuning is really about modifying behavior by updating fewer quantized parameters using factorized matrices. So this idea of using fewer parameters and of using the LoRa factorized matrix approach. This gets us from 3.8 billion down to 2.7 million parameters, less than 1%. And then we come in with quantization. This is technically blockwise k-bit quantization, effectively just allowing us to express more information with less. And the key to the QLoRa method is that from that 2.7 million parameter level we're coming in and we're starting to actually quantize that down to four bit before we we begin training during training we will de-quantize when we have to do computations and before re-quantizing to continue the training process. Next week, we are going to be covering how to not fine-tuning and loading, but now serving an inference with VLLM. So we hope you can join us for that one. But for today, we're going to go ahead and get started with the Q&A period. I'd love to invite Chris back up to the stage. And if you guys have questions, it looks like Manny is crushing it in the Slido right now. So shout out to Manny as usual. But if you guys have questions, crushing it in the Slido right now. So shout out to Manny as usual. But if you guys have questions, throw it in the Slido. We'll also try to get to your questions if you throw them in the YouTube live chat. But Chris, let's go ahead and jump right into it here. First question. Is the reason we don't get inference latency benefit with QLORA because model weights are re model weights are retained as 32 bit during inference. I mean, I, yeah, I mean the question, uh, to be more specific about, uh, the phrasing, I think we could say that the, the model weights are de-quantized to a higher precision during inference. So yes, that is why we don't see a benefit to inference. 
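Stepping back to the training setup described in the walkthrough, the "boilerplate" looks roughly like this. It assumes the model, tokenizer, lora_config, and a formatted dataset from the previous sketches; the batch size, learning rate, and step count are placeholders, and newer trl releases move several of these arguments into SFTConfig.

```python
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="mistral7b-instruct-generator",   # hypothetical run name
    per_device_train_batch_size=4,               # placeholder -- size to your GPU
    gradient_accumulation_steps=4,
    learning_rate=2e-4,                          # a common QLoRA starting point
    max_steps=500,                               # placeholder step count
    logging_steps=25,
    bf16=True,                                   # keep the compute dtype compatible
    optim="paged_adamw_32bit",                   # the paged optimizer from the QLoRA paper
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,                 # assumed to hold pre-formatted prompts
    dataset_text_field="text",                   # column name is an assumption
    peft_config=lora_config,
    tokenizer=tokenizer,
    max_seq_length=2048,
)

trainer.train()
```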
In fact, we see a penalty. It's not a big penalty, but there is a penalty. And so, but yes, that's exactly why. Oh, okay. Nice, nice. Yeah, excellent question. Astute one there. And then first one from Manny here. When we're talking about parameters, are we referring to additional features such as Xs in the equation, Y equals predict X1, X2, Xn? Are X1 to Xn considered parameters? What are we talking about when we say parameters? Yeah, parameters, features, it's all numbers, weights. I mean, we have so many different names for similar kinds of objects. I would think of parameters more specifically as the entities that fill up these weight matrices that we use to compute when we're actually doing that matrix multiplication. But yes, I mean, essentially a parameter, a parameter is any node in the, in the model architecture, right? So this is not something that you're going to want to use with like your XG boosts or your, you know, your kind of traditional ML methods, right? It's not like a random floor forest applicable, you know, technique. It's specific to that deep neural architecture. And it's also specific right now to that transformer architecture, though there's no reason it needs to be. It is most explored in that space. Hopefully that answers the question, Manny. Yeah, yeah. Well, we'll kind of flow through some of these other questions and pop back to Manny's questions as well. I think this one's super relevant to everybody. If I don't have a powerful laptop, where can I practice these techniques? Honey, P, it's Colab. Get yourself into Colab. Colab makes it so easy. And the whole benefit of this kind of thing is we can load these very large models with very little resource. And so oftentimes, you can load like a 3 billion or 6 billion parameter model, you can load that in a free instance of Colab right using the free free tier GPU, the T four. So it's I think that's a great way to start if you don't have a powerful laptop. As you get more embroiled in the space, you might look at other cloud hosting solutions, Lambda or AWS, whatever you want. But for the getting started beginner, I would say Colab is your best friend. If you want to, you can pay for compute so you can pay to get a little bit more uh beefy gpus but uh stick to the free tier and and stick with your kind of three to six billion parameter models and you're gonna have a great time yeah yeah yeah yeah stick to the three to six bill quantize quantize quantize quantize and uh and then colab like we we teach entire courses in collab and we do a ton of fine tuning throughout so you know just try to be as efficient as possible don't sit there and do tuning for you know days and days at a time if that's not really something that you're interested in you know use small, try to make the model as small as possible through picking the small size of Hugging Face and then quantization for sure. But yeah, there should be nothing stopping you if you're a beginner. You don't have to get AWS. You don't have to get all these things. Okay, Islam, we got a question that's getting upvoted here. Can I do this fine tuning with Lama CPP? And is this fine tuning possible to plug into the end-to-end fine tuning within a RAG framework? So E2E fine tuning within RAG framework, yes, 100%. The RCAI, we've done an event with them. Their DOM framework and GitHub, we'll get a link for you guys to drop into the chat. That is 100% a great tool that does leverage or can leverage LoRa as well as quantized methods. 
In terms of Lama CPP, I'd have to double check. I don't know off the top of my head, but I will double check and then we can include that information in a comment if I'm unable to find it before the end of our time together today. Okay. All right. Back to Mandy's next question. We say weights and biases when we talk about ML models or neural network models. So if weights are parameters, are we saying weights and biases that are parameters in the LLM world are weights and biases parameters? Let me think through this question. world are weights and biases parameters? Let me think through this question. We say weights and biases when we talk about LLM. So if weights are parameters, are we saying weights and biases parameters? Like our bias is also parameters? I guess is that the question? No. But yes. I mean, I mean, at the end of the day, the thing we care about is the weights. That's, that's, that's, that's all answer this question. We want to update the weights, aka the parameters. Okay. All right. Good stuff. Then I'm gonna go ahead. Last manny question here. Can you speak about examples of LoRa adapters? Like, what are they? And what are they created for? a tool perspective. So let's say we create a LoRa adapter that's very good at translating natural language to SQL. And then we create another LoRa adapter. And that LoRa adapter has been fine tuned to translate natural language to Python. Then we create another adapter and you you see you can kind of go on that the idea is that whenever we do inference we can choose whichever of those adapters or those laura layers to flow information through that's going to make our output consistent with what we fine-tuned it to do so you can you can think of them as little hats you can put on your model that's going to change its behavior, but it doesn't touch the, it doesn't modify or it doesn't have to modify the base model at all. Just kind of this hat that sits on top of it, but gets it to do a different job. And the idea is that we can choose those hats as we want, even at time of inference, we can choose which hat we want it to wear. Yeah. Yeah. And I mean, you know, this is like the thing for businesses too. It's like, if you think about these adapters, man, it's like they're plug and play. And so if you want the LLM to do something super specific, that prompt engineering has only gotten you so far and you just can't get what you need exactly to get in and out in specific ways with your customer or your user. If you want to really constrain what your user can put in, you want to really constrain what comes out, this fine-tuning piece, this lore adapter piece is going to be like your friend. You know, we had a great meme that we posted on LinkedIn recently where it's sort of like if you're doing fine tuning, you're kind of doing LoRa. So it's sort of like this is a big question. You know, examples of LoRa adapters would be like anything that you fine tuned, you know, you might say and. OK, we've got a couple of minutes left. I'd like to shout out out to you know thanks for the great note just want to appreciate your efforts uh appreciate a lot it looks like we've got uh george i think he's struggling with a specific error maybe we can comment on that after the the event he's he's put his error into slido as well um i guess uh last question this is a big question. So you can take maybe two minutes, Chris, what are the trade-offs of using dimensional reduction techniques like LoRa, QLoRa, PEFT on LLMs in terms of training, inference, fine tuning? 
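That "hats" picture maps directly onto the peft API. A sketch, with hypothetical adapter repo names standing in for whatever you actually fine-tuned:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", device_map="auto"
)

# hypothetical adapters -- one tuned for NL-to-SQL, one for NL-to-Python
model = PeftModel.from_pretrained(base, "my-org/mistral-sql-lora", adapter_name="sql")
model.load_adapter("my-org/mistral-python-lora", adapter_name="python")

model.set_adapter("sql")      # wear the SQL "hat" for this request
# ... generate ...
model.set_adapter("python")   # swap hats at inference time; the frozen base never changes
```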
Like when you think of trade-offs, maybe best practices here, what do you think of? I mean, the big one is quality or like how good the output is uh there is a trade-off there it's really small and beyond being really small it's really small like so okay this is this is the way i think about trade-offs when it comes to laura and and the crew uh i can i can find you the laura model right to be let's say like 98% as effective as full fine tuning, right? But I can do that in a 10th of the time with a thousandth of the resources, right? So divide by a thousand, the number of resources. I mean, that is a trade-off. There is a trade, you're losing 2%. But like, it doesn't feel like a real trade off. And especially in terms of business value. It's not like a, it's not a real trade off these days, like, especially if you use a high enough R or rank in your your Laura, so you're using that kind of 128 are, you're still getting a massive reduction in compute but you're retaining so much of the performance that it it truly doesn't feel like a trade-off it there is a trade-off to be clear there is always technically going to be a trade-off but it lets you do things you wouldn't be able to do so it doesn't feel like a trade-off i I mean, for small companies, you can fine tune a model that does a whole new novel thing that fuels your business, that is your business, right? That you just couldn't do if you didn't use these methods. In that case, there is no trade-off, right? It's enabling you to do something that was previously impossible to you. That's only advantage. When it comes to inference specifically, possible to you that's only advantage uh when it comes to inference specifically both uh the the Q Laura or any quantized uh method using bits and bytes and Laura if you're talking about non-merged Laura adapters do impart a small inference latency penalty it is. At scale, it can maybe be felt, right? If you're really getting to those hundreds of thousands of requests per second compared to a very efficient model, you might want to re-quantize that to another format and serve that model directly instead of having it part of your LoRa stack. But again, these are problems that come with scale and that scale kind of also helps you fund the solution. But outside of that, you're not going to feel these issues until you're into the six figures or more requests per second for your kind of LLM stack. So I would say there are trade-offs, but when you're getting started, they really don't appear as trade-offs. All right. Yeah. Okay. So use PEFTQ, Laura, unless you got a million requests per second. Sounds like a plan, dude. All right. Cool. Let's go ahead and wrap it up. Thanks, Chris. And can't wait till next time. Thanks, everybody, for joining us today. Again, next week, we'll be back talking inference and serving and how to do it efficiently with VLLM, one of the hottest open source tools out there for doing that. We'll tell you a little bit about the tool and its background. If you like this session, you might also really like cohort four of LLM Ops, LLMs, and Production launching February 13th. In that course, which we're going to be soon announcing an expanded curriculum for, you'll learn to prototype and scale production LLM systems, including using RAG techniques, including fine tuning, and so much more. Check it out in the link. And then lastly, please share any feedback you have on today. You can drop it in the chat or you can drop it in the feedback form. That will drop to you now. 
And that's it for today. Until next time, keep building, shipping, and sharing, and you know we'll be doing the same thing. See y'all next week.", "datetime": "2024-06-09T19:37:57.768795"}
train/transcriptions-fa8b13f0-6440-4646-8d0c-cd15cf6d3679.json
ADDED
@@ -0,0 +1 @@
{"url": "https://www.youtube.com/watch?v=EeZIKQmWSXg", "transcription": " Hey, whiz. Hey Wiz, so if I'm a super beginner trying to get into fine-tuning, should I use Hugging Face and Peth Library or should I maybe pick up Mistral Fine-Tune instead? Hugging Face is probably great, yeah. So is it like a fundamentally different method that is being used for fine tuning between like a peft laura and the approach we'll see today and mr fine tune no no it's the same same thing under the hood yeah same same okay okay so is it a quote lightweight code base that enables quote memory efficient and performant fine tuning on mistral models at least yes absolutely it's that yes is hugging face also a lightweight code base that enables memory efficient and performant fine tuning on mr the light the lightweight we can quibble about for sure okay but the But the rest of it, absolutely yes. Okay, okay, okay. But it does the thing. It did the fine tuning, right? It did, yes. Okay, okay. So we're going to sort of try to assess today if this thing provided a, quote, simple guided entry point to fine tune mistral models. And, of course, we can quibble about simple and guided, but it did the thing today, right? It did the thing. So, you know, it does the thing that it says on the 10 and here we are folks, another day, another tool. Welcome to the open source LLM edge, everybody. We're going to dive in and get to the bottom of the concepts and code behind Mistral FineTune. I'm Dr. Greg, that's the whiz, and we are co-founders of AI Makerspace. We're excited to dive into this new tool, and by the end of today, you'll sort of recall what powers and underlies fine-tuning throughout the industry, not just open source tools, but even a lot of the closed source tools that you might have your hands on today. Of course, if you have questions along the way, please use the Slido. We will get to questions probably throughout this event. This is going to be kind of a discussion heavy one. So keep the questions coming in the Slido. And also if you've got questions that are super relevant to the discussion we're having at the moment, YouTube live. All right, everybody, let's get into it today. We're going to go ahead and kick off fine tuning. Mistral 7B with Mistral Fine Tune. And aligning ourselves to today, we want to really make sure that we understand the legend, Laura, that's at the core of all of the fine-tuning that we see. We want to understand how to use Mistral FineTune. We're going to show you how to do that. We're going to do some instruct tuning with it. And we want to compare and contrast what we saw with this new library to what we're comfortable with, what we're used to with Hugging Face's parameter efficient fine tuning library and methods like LoRa and QLoRa. So we'll start with a review and then we'll dive into what we're seeing from Mistral Fine Tune, talk about Hugging Face versus Mistral FineTune. Do some fine-tuning and we'll again discuss throughout. So everybody, Laura, let's review here. First off, fine-tuning. What are we talking about? Well, we're talking about modifying modifying the behavior of an LLM by updating the weights of the neural network, the weights of the transformer. And full fine-tuning, it means updating all of the weights. 
But full fine-tuning because these things are so large is often quite infeasible for the average Joe for the GPU poor out there like we are and like we know many of you are and so we need a better way and the better way that the industry has really adopted is low-rank adaptation. And this is now not full fine-tuning, but rather fine-tuning only part of the neural network, part of the transformer, and using a factorized matrix approach to do so. Let's recall back to the OG paper here. October 2021 light years ago, quote from the abstract, as we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Absolutely classic. Hence, we propose LoRa, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the transformer architecture, meaning each attention layer within each decoder block, thus greatly reducing the number of trainable parameters for downstream tasks. Okay, hold on, say what? Freezes the pre-trained model weights and injects trainable rank decomposition matrices. Hold that thought. We're going to do some stacking and then we'll discuss. Mistral FineTune, just released, says, Mistral FineTune is a lightweight code base, memory efficient, performant fine tuning. It is based on LoRa, a training paradigm where, quote, most weights are frozen and only 1% to 2% additional weights in the form of low-rank matrix perturbations are trained. Low-rank matrix perturbations. Okay, so we've got training paradigm, 1% to 2% additional weights in the form of low rank matrix perturbations. That's how Mistral is talking about it today in May, 2024. And the guys from Microsoft that wrote the Laura paper talking about it in 2021 said freezes the pre-trained model weights and injects trainable rank decomposition matrices. Okay, so let's sort of cover a little bit of terminology before our discussion here. One of the things that really inspired the authors of the LoRa paper was a paper written in December of 2020 called Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. And so here we can see a quote from this paper. Common pre-trained models have a low intrinsic dimension. So there exists here a re-parameterization that is effective for full fine-tuning and can be as effective as the full parameter space. So this whole idea of this re-parameterization, that is where LoRa comes in. And of course, we're using a low rank adaptation approach. So it's important to understand the idea of matrix rank. The easy way to sort of understand this is to think about a very simple problem where we have a bunch of columns in our data set, and we're thinking about having a number of linearly independent columns. This idea is very common for anybody that studied matrix algebra. And so we can kind of think of how many features, how many columns are actually giving new information in sort of a broad way. We can sort of contextualize this idea of rank. How much of the information is actually important to, let's say, pay attention. And when we think about another classic matrix algebra principle technique that's used here, it's just decomposition. So we're decomposing a problem into its constituent parts, thereby breaking a difficult computation into simpler tasks. So all of this taken together from the idea of an intrinsic dimension, to the idea of low rank, to the idea of matrix decomposition, to the idea of trainable injected decomposition matrices to low rank matrix perturbations. 
We're going to sort of wrap all this together in a discussion now. I'd like to invite the Wiz back up to talk through this. Wiz, I think you did a pretty cool video on this idea of Laura quite some time ago. And I wonder if you can kind of just give us your overview, your thoughts, like with the diagram that gets shown in everybody's presentation it's the one on the first page of the paper as you look at this maybe you can walk us through what you see and you know maybe define some of these letters along the way for us yeah so i mean basically this is just a very fancy way to say that you you know, as we train our model, right, we can represent, so think of it this way. Our model has two, two real quote unquote real components, right? One is this idea of a, uh, you know, base weight matrix. And the other is this idea of a, you know, update weight matrix. Now, typically these are not like, we don't need to pull these apart. And in fact, we wouldn't because it adds a lot of, you know, overhead where we have to add them back together and everything like this. But the idea is that because we can represent our weight updates as a separate update matrix, right? And then we can lock in those base pre-trained weights, we can then take that delta matrix, right? And represent it in this low-rank, you know, product matrix form. We have these two matrices that will give us our actual, you know, weight update matrix. So the key insight here is that the base model weights are different than the final fine-tuned weights, and that difference is some delta weight, right? And we can represent that delta weight with this low-rank form. And the idea is we're going to pay computational overhead for this because we have to keep factoring together these matrices and then adding them back. But it's worthwhile to spend that little bit of extra compute in order to save a massive amount of required GPU memory. So while the training is the fine tuning is is is slower, we're adding latency to our training. Right. We massively reduce the amount of actual parameters that we need to hold in memory, which means we can train these models on much smaller than previously used, you know, hardware. And that's the crux of it, right? By training A and B and then factoring them together and adding them back to that base weight matrix, what we're really doing is we're figuring out what's the best, you know, form for A and B that results in weight updates that make us good at our task. So there you go. Okay. Okay. So it's really important to understand then that this is actually only important during training. Is that right? Where we're sort of actively updating the weights. So that's a great thing that you've mentioned. So no, well, kind of. So the fact that we can as a low-rank form means that they are very lightweight, and it means that, you know, well, if we can add these quickly to our base weights, you know, then, well, at inference time, actually, we can just add whatever one we want. So say we had Mistral, for example, and we fine-tuned it to do summarization, right? Well, we'd have like an adapter, right, which is just the LoRa weights that we could apply to that base model to make it good at summarization. But let's say we also fine-tuned it on a math task or a, you know, translation task. Well, at time of inference, we can choose which adapter to use. So it is very important for training, but we can also leverage it in order to make inference more quote unquote powerful. Okay. Okay. Yeah. 
So we can swap out these low rank adapters at inference, but what we're doing during training is we're essentially like like plugging in an empty adapter and sort of uh training it we're calibrating it to the sort of uh thing we want it to be able to do right i mean this is ultimately when we're done training do we have then an adapter or do we have like a model that's fine-tuned? So because we kept our base model frozen, we never actually touched it, right? We still have the base model. It still exists, but we also have this artifact that we've created that we commonly refer to as an adapter, right? Which is just the LoRa weights. And now as long as that base model doesn't change, we can carry those adapters around and then use them like a bit in a drill, right? Whenever we have that base model, we can use those adapters. So it's important to understand in that, exactly, as long as the drill is the same or the base model is the same, we can use that bit or adapter anywhere. We don't have to save the base model every time. We can keep downloading it, we can download it when we need it, yada, yada. We can move this all around. It's fine. But the base model has to be literally exactly the same, right? Or else the bit won't fit. Ah, yes, yes. Okay. It's got to be the same drill, right? Yes. Yes. Okay. It's gotta be the same drill, right? Yes. Exactly. Yes. Yes. Yes. Okay. So, or at least like the same little claw on the end of the drill. So, okay. So then there's this difference in language between if you read the paper and if you read the Mistral FineTune thing. Can you talk a little bit about this trainable rank decomposition matrix versus matrix perturbations idea why are we getting this sort of um differentiation in language now is where's the perturbation idea coming from exactly it's just a difference in language i mean it's the same it means the same thing so it's not something separate. When we're training a weight matrix, right, we are perturbing the weight matrix. So when we, when we update our weights, we are wiggling it about, right? Or, you know, a fancier way to say wiggling it about, of course, is just to perturb. Perturb, yes. per turn per turn yes but there's no difference the language is just uh fancier you know it's it's it's got more college words in it so it's talking about that delta w that little change in weights that we're then sort of decomposing into a times b matrix here and so um so then just you know as we sort of think about the other couple letters on this chart here. Okay. Yeah. So I've got X by D. And can you tell me what these dimensions are and why they're relevant here? And then H as well. Yeah. So X is just the, basically when we're talking about, so, okay. 
The idea is that we have some base matrix and that base matrix is some you know d by d matrix our initial input is x and then our changed input is h right so all that this is saying is we have some d by d matrix which is represented on the left by that big blue square we turn that into a d by r r by d matrix set and then we concatenate those resultant matrices so we do we we get the product matrix of of a and b and then we concatenate it with or we just you know plus it to uh our pre-trained weights which are in form d by d and of course thanks to the way that the matrix math works out r by d times d by r is going to result in a matrix of size d by d so their sizes all match up x is our input and then h is going to be our output from this process X is our input, and then H is going to be our output from this process. All right, all right. So X is basically heading into the transformer. That's what sort of X represents, this X by D. It's sort of the embedded and positionally encoded information, and it's flowing through the block. And then H is sort of the output of, is this the attention layer or is this the output of the entire block here? So this is actually pretty interesting. So we can use LoRa where, where so ever there is a matrix. It doesn't have to be just the attention mechanism. It can be in the MLPs. It can be anywhere that there's a big matrix that we don't want to be big, but instead wants to be small. So, you know, in the initial Laura paper, we did see that we only applied it to specific subsets of the weights in the model. Specifically, we applied it to QV, I believe, if I'm remembering correctly, but we applied to it only some layers. Now, we're very judicious. We apply it to basically everything, right? Where we're going to apply it to the MLPs, which is the feed forward network. We're going to apply it everywhere we can. Right. In fact, with things like QLora we found, that's actually even better. It results in better models at the end of the day. But the idea is this is, Lora is not a process that's unique to attention, that's unique to, you know, specific layers in the transformer architecture. you know, specific layers in the transformer architecture. Now it is it's useful because the transformer architecture is specifically large language models are so huge and they have this property of intrinsic load dimension. So we can't just use this in like any old matrix. But for transformer matrices, yeah, we can just we apply it pretty judiciously. We just slap it in it pretty judiciously. We just slap it in there. Okay. Okay. And, and I mean, let's go back to like the whole name, right? And we say lower, lower, lower, but it's low rank adaptation. So it really is just sort of technique that can kind of now even be applied much more broadly than we thought in the initial paper. Is that right? I would say probably the application space is the same. 
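Writing out the picture being described here (using the transcript's square d-by-d base weight for simplicity; the original paper allows a general d-by-k matrix):

```latex
h = W_0 x + \Delta W\,x = W_0 x + B A\,x,
\qquad
W_0 \in \mathbb{R}^{d \times d},\quad
B \in \mathbb{R}^{d \times r},\quad
A \in \mathbb{R}^{r \times d},\quad
r \ll d .
```

Only A and B are trained, so the trainable parameters per weight matrix drop from d² to 2dr; with d = 4096 and r = 64, for example, that is roughly 16.8M versus about 0.5M.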
Large language models is where we're going to see this the most and in other kind of like larger uh larger models where we've trained things so much that they have this property uh you know the matrices are so huge the data is so plentiful but uh yes it is it's it's a specific the way we apply it has evolved or what we apply it to within that system has evolved even if the actual you know crux of the application is the same which is that's useful for lms it's not very useful like you know for your for your smaller networks or for like uh you know uh things like really small bert you know we're not gonna be thinking about this too much okay okay okay yeah because it's all about reducing the number of trainable parameters and like if we've got a consumer grade gpu and we can do like a relatively complete fine tuning on a pretty small model we'll just do that we don't need laura right it's it's really all about making sure that it aligns to the gpu power of the consumer today, for the GPU poor of us out there, right? All right. Sounds good. Thanks, Wiz. We'll come back to you to show us how to do Mistral FineTune. And speaking of Mistral FineTune, let's take a look a little bit closer at the specific library here. So what we can see with Mistral FineTune is it is this lightweight code base based on lower end, blah, blah, blah. Now for maximum efficiency, it's recommended that you use a big daddy GPU. The code base is optimized for these kinds of things, but for smaller models, we can use a single GPU. And then that's kind of the way we're gonna show the fine tuning today now they did provide a note here on the repo that the goal is to provide a quote simple guided entry point to fine-tune Mistral models this is what we're trying to test out today we'll see what you guys think as well did they achieve achieve their goal yet, or is there still work to do with Mistral FineTune? So they walk us through a number of methods that they can use for fine-tuning, a number of types of fine-tuning, specifically at first in the repo. They say, well, you can do pre-training that has sort of continued pre-training. You can do instruction tuning and you can do function calling. Now, these are all fine tuning. OK, so pre-training is fine tuning, continued pre-training. Instruction tuning is fine tuning. Tuning for function calling is fine tuning. And again, they're all using the LoRa approach under the hood. Now to sort of get this done, it's very simple order of operations, similar to what you would see in any other fine tuning library, prepare the data set, verify it, start the training, make sure your config is right, and then test it out by doing inference. Now they they did also sort of note, hey, you can easily plug in 1B to this. And we went ahead and did that today because, you know, why not? Let's try out all the features and goodies here. When we looked at 1B, we were specifically looking at training loss, evaluation loss, and evaluation perplexity. Although there's a number of other things that Wiz will show you is kind of available if you're linked up to 1B to look inside the black box as you're training. Okay. Now, when we think about loss, remember, remember everybody, like, you know, how do we calculate loss? Well, we're going to use cross entropy. Now to go much deeper on cross entropy, you know, join us again next week when we're talking logits and loss. 
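On reading those Weights & Biases charts: evaluation perplexity is just the exponential of the average per-token cross-entropy loss, so the two curves always move together. A quick illustration with a hypothetical loss value:

```python
import math

eval_loss = 1.85                        # hypothetical average per-token cross-entropy (nats)
eval_perplexity = math.exp(eval_loss)   # ~6.36 -- roughly "as uncertain as picking
                                        # uniformly among ~6 tokens"
print(eval_perplexity)
```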
We're going to go back down deep into the transformer and talk about exactly how those weight updates during training relate to the loss function. Now, the other thing that Mistral FineTune allows you to do — and this is sort of an open question: is this super valuable or not? — is leverage their Mixtral models, the mixture-of-experts models. And this is directly from the repo: a best practice for Mixtral models is to train them a couple of times independently, because depending on the seed you use during training, you can get a really high degree of variance between instantiations of fine-tuning a Mixtral model. And I've got a quick discussion point here that I want to bring Wiz back up for, just in terms of Mixtral. Is there a reason why we're not fine-tuning Mixtral today, Wiz? It seems like it's cooler, it's newer. Is it harder or something? What's the deal? It's not harder — in fact, it's the same. It's just fine-tuning; nothing changes. But the Mixtral models do have a GPU requirement that exceeds the possibilities of the Colab environment. Remember, Mixtral doesn't require a ton of active weights for inference, but it does require a lot of weights to be loaded in GPU memory. Even though we're not touching all those weights when we do inference, we need to be able to, in order to have all of the correct paths through the model available to us. That requires a larger GPU memory capacity, even if we're not using that many weights as we do inference. Inference is still zippy, still fast, but we have to have the capacity to hold the big model and all the available paths in it. That's right. And as we said before, you can use LoRA not just on the attention layers but also, as you mentioned, on the feed-forward layers. For everybody trying to recall what Mixtral looks like and how it's architecturally different: that feed-forward network layer is replaced with a sparse mixture-of-experts layer. So you're saying you have to hold each of these mini neural networks — feed-forward network one, two, three, et cetera — in memory. Even if you use injected trainable low-rank decomposition matrices, you still have to hold all of this there, and that makes it more computationally intensive. And remember, we not only have to have those low-rank decomposed matrices, we also need the base matrices — those big, honking frozen weights — which are going to take up all of our capacity. The adapters take up very little space, thankfully, but we've got to load all of this into memory so that every path is available. It's like, if we imagine that each of these feed-forwards is the equivalent of a door: we have to have all the doors available to us, even if we're not going to go through all of them every time, because we might need to get to a different room the next time we go through, right? So we have to have them all there, even though we're not going to use them every time we do a forward pass.
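As a rough back-of-the-envelope illustration of that point (my own numbers, not the talk's — the published Mixtral 8x7B figures are roughly 47B total parameters with about 13B active per token):

```python
# Back-of-the-envelope sketch: even though only a fraction of Mixtral's weights are
# "active" per token, ALL experts must be resident in GPU memory, and the LoRA
# adapters barely add to that footprint.
total_params  = 46.7e9   # approximate total parameters in Mixtral 8x7B
active_params = 12.9e9   # approximate parameters actually used per token
bytes_per_param = 2      # fp16 / bf16

gib = 1024**3
print(f"weights you must load : {total_params * bytes_per_param / gib:,.0f} GiB")
print(f"weights used per token: {active_params * bytes_per_param / gib:,.0f} GiB")

# For comparison: a rank-64 LoRA adapter on a single 4096x4096 projection
d, r = 4096, 64
adapter_params = 2 * d * r   # A (r x d) + B (d x r)
print(f"one adapter           : {adapter_params * bytes_per_param / 1024**2:.1f} MiB")
```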
Okay, yeah, that makes a lot of sense. So literally, the more experts you have, the more compute you're forced to use — even if you're fine-tuning, even with LoRA, even if you're quantizing. It just scales with the number of experts. That's right. Okay, very cool. All right then, we're just about ready to rock and roll into the demo today, guys. Instruction tuning with Mistral 7B is going to be based on, first of all, some instruction-tuning data we've grabbed off the shelf: the Dolly 15k dataset. This is available directly on Hugging Face, and it's a classic dataset with a lot of different categories of instructions — closed question answering, classification, open QA, information extraction, et cetera — so it gives a broad-perspective view. Now, we're not going to use all 15,000 data points for fine-tuning, and we're only going to do a few hundred iterations, but this will give us a feel for the difference between the base model and how well it does with our instructions after we fine-tune it. We're going to use Mistral 7B Base v0.3. The headline difference between v0.2 and v0.3 is the extended vocabulary — up to 32,768 tokens, to be exact — along with the new v3 tokenizer; that's the real distinction from v0.2. So with that, I'm going to pass it off to the Wiz to show us how to go through Mistral FineTune to do some instruction tuning on Mistral 7B. Take it away, man. Yes. Okay. So this is pretty straightforward, thanks to this library — however, it does require, well, we'll talk about it. First thing we've got to do is grab some dependencies, pretty standard stuff. We're going to grab mistral-finetune, which is the repository, which can be found here. The repository has great instructions; it has a tutorial that doesn't currently work, though I'm sure they'll update it. The basic idea is pretty straightforward: we need to get the model and do some stuff, and we'll walk through the notebook. We'll get the repository, cd into it, and install all the requirements that we need. Easy peasy. You can ignore these dependency conflicts in the Colab environment — not worried about it. Then we need to download the model: Mistral 7B v0.3. As Greg said, this is a long-context model; however, keep in mind that because we're doing this in a Colab environment, we're not going to be taking advantage of the long context — it's just not possible in Colab, so we're not going to do it. If you're using the recommended equipment, which is a node of GPUs, you're going to be fine. The idea is that we're going to use this 7B v0.3 — still a great model, we love to see it — and then we're going to extract that model into a mistral_models folder. Easy.
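As a sketch of that download-and-extract step: the talk pulls the weights from Mistral's CDN, but an equivalent route (an assumption on my part, not what the notebook does) is to pull the same files from the Hugging Face Hub, which requires accepting the model's terms and logging in with a token:

```python
# Sketch of the "download the model" step, assuming the Hugging Face Hub mirror of
# Mistral-7B-v0.3 rather than the CDN tarball used in the talk.
from pathlib import Path
from huggingface_hub import snapshot_download

mistral_models_path = Path.home() / "mistral_models" / "7B-v0.3"
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="mistralai/Mistral-7B-v0.3",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)
```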
Now, the next step we have to think about is formatting our dataset into the correct format. We're doing instruction tuning, so we're going to follow the instruction-tuning guidelines they give us in their repository. As you can see, what we need is a JSONL file with this key, "messages", which holds a list of messages, and each message needs a role and content. This is very typical if you've seen fine-tuning before: we have the role "system" with the system prompt as its content, then the role "user" with the user prompt, and then the role "assistant" with the response. And that's it — a pretty classic example of fine-tuning — and it's easy enough to create this JSONL file. You do have to make sure your data is in this specific format, though; if you haven't contorted things into this shape, you will not find success, unfortunately. Now, we're going to be using some data from LIMIT — "less is more for instruction tuning." Specifically, we're using instruct-v1, aka Dolly HHRLHF, and that's the dataset we're using today. It's a fairly standard, pretty classic dataset — it feels like it's from back in the day. The idea is that we have some instructions, we have some responses, and we're going to train the model to get good at following that instruction task. Okay, so to do this, we first create a data directory to put all our data into. We're going to cheat a little bit here and use Hugging Face's hub and datasets tooling instead of just pandas — it's easy to use, and the dataset format is familiar and great. We're going to use notebook_login, because using this dataset might require accepting a EULA, and to prove we've done that we need to show we are who we say we are on Hugging Face. Then we load our dataset, which is MosaicML's Dolly HHRLHF. The best part of this dataset is that it's simple and straightforward, so it's easy to contort into what we need it to be. As you can see, it's definitely not in the format Mistral currently expects, so we have to contort it. We're going to write a simple formatting function that does that: it creates the expected format in a column called "data", where we have our "messages" — a list of messages, each with the key "role" holding the role and the key "content" holding the content. And away we go, easy peasy. We make sure our formatting function works by testing it on one of the samples, and we look at its messages: we have the system prompt (the "Below is an instruction..." boilerplate), then our user message, "What is Kangen water?", and then the assistant's explanation. Very cool. So we map our Mistral FineTune formatting function over the entire dataset, training and test. We can see that we now have this data column, with about 60,000 prompts in the training set and about 5,000 in the test set. Nice and good. We're going to save those as JSONL files, since that's what the mistral-finetune library currently expects, and we can just write these out: we dump the data into the JSONL file and separate records with new lines — that's where the name comes from, JSON Lines, where every line is a new JSON object. And we do the same for our test split, which we're going to call "eval", because we're not actually going to do testing with it.
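Here is a sketch of what that formatting and JSONL-dumping step can look like (my own reconstruction, not the notebook's code; the column names `prompt` and `response`, the system-prompt text, and the output paths are assumptions worth double-checking against the actual dataset):

```python
import json
import os
from datasets import load_dataset

# Dolly HHRLHF from MosaicML; may require accepting terms / logging in to Hugging Face.
dataset = load_dataset("mosaicml/dolly_hhrlhf")

SYSTEM_PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def mistral_finetune_format(example):
    # Contort each row into the {"messages": [...]} shape that mistral-finetune expects.
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": example["prompt"]},        # assumed column name
            {"role": "assistant", "content": example["response"]}, # assumed column name
        ]
    }

dataset = dataset.map(mistral_finetune_format)

# JSON Lines: one JSON object per line, one file per split.
os.makedirs("data", exist_ok=True)
for split, path in [("train", "data/train.jsonl"), ("test", "data/eval.jsonl")]:
    with open(path, "w") as f:
        for row in dataset[split]:
            f.write(json.dumps({"messages": row["messages"]}) + "\n")
```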
We're going to evaluate during training with it, which is always fun, but it's just called "test" by default in datasets, so we'll leave it there. Now we need to verify the dataset, and here we enter what I'd call the current shortfall of this particular library in its current form. We run these reformat_data scripts, and first of all, they error silently for the most part — if your data is not in the correct format, they might just not say anything. If your data is in a recognizable format that still doesn't work, then they will complain, which is what we want; that's ideal. And they do try to reformat, but as they call out in the repo, if you have some exotic data, this isn't going to do it: you need to do the work to get the format into the shape the library wants. That's not new, and it's not specific to Mistral FineTune. Now, the next piece is our training configuration — the YAML file. Instead of long argument lists or a bunch of scattered parameters, we use a YAML file, and the YAML file dictates everything. If we look at their repository, we have a bunch of cool hyperparameters available: checkpoint frequency, log frequency, rank — it's got a lot, and we're going to express all of it in this .yaml. It's not necessarily the best thing in the world, but it works, and that's what we want. First we set up the data part of the YAML: under a "data" header, we give instruct_data and eval_instruct_data keys that we point at the paths to our training and eval data. Easy peasy. Then we have model_id_or_path, which just points to the model we downloaded. Then we set some classic hyperparameters: LoRA rank, sequence length, batch size, micro-batches, max steps, learning rate, weight decay. It's got a lot of things — though not everything: if we look at the options currently available, it's not everything we're used to if we're coming from another library. However, it makes sense, it works, and that's great. Now, you'll notice the sequence length being used here is 4k. This is because we have a limited amount of GPU memory and want to keep it relatively low; where we might get away with something in the 7–8k range, we'll keep it at 4k to make sure we're not blowing through our memory. Our LoRA rank is going to be 64 — dealer's choice.
We just can't make it too high or else we'll run out of memory. And of course, we're only going to do this for 300 steps, so we're not going to fully train on our dataset — that would take a very long time. We're going to start the learning rate rather high and then decay it at a pretty standard rate, I think from the Chinchilla paper, and we'll set our output directory to /content/limit_test. Then we just have to convert this to YAML format, which we do here. You'll also notice we have some other parameters we can set, like the seed, how often we log, how often we eval (and whether we eval at all), how often we save a checkpoint, and then save_adapters. Remember, because we're doing adapter fine-tuning, we need to save those adapters periodically. It's silly to say we're not actually training the model — we're definitely training the model — but what we're actually training is these adapters, and the adapters modify the model. So that's the idea: we want to save those adapters, those two broken-out matrices, as we go through training. And our run directory is just where we save this run. We're also going to integrate Weights & Biases, like Greg said. It's an easy integration — we just provide these options: "mistral-finetune" is what we're going to call the project, the run name is going to be "dolly-instruct", we provide our API key, and we set offline equal to false. Then we write all of that out to a YAML file, and we can use that YAML file to validate our data. What happens here is that a script validates all of our data: it checks that the data is correctly formatted, it gives stats for the data — we get all this cool stuff — and it also gives us, in a very fun way, an ETA for how long training might take, which is pretty cool. You love that. So we validate the train and test sets, and we get "no errors" twice in a row, which probably means there are no errors — always ideal. Now that we've done this, we can go ahead and start our training. Training is very straightforward. We just need to make sure, because we're in Colab, that we provide these additional environment variables so that we target the GPU in our Colab environment. Then we make sure there's nothing in the output folder, and then we run torchrun with the training script from mistral-finetune, pointing at that same YAML we just created and used to validate our data. Great — we love to see that. I see a question in the chat: what does number of steps mean in this context? That's just the number of iterations we're going to run through. In our config here we set the sequence length, the batch size, and the number of micro-batches, so the number of steps is the number of times we repeat an iteration on a batch, which contains eight micro-batches. That's the idea. You can see that it's currently training now — we trained it beforehand, and we're doing another run just to show off the W&B integration. Very cool.
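Pulling that configuration step together, here is a rough sketch (my own reconstruction of the shape described in the talk — the key names and the values not stated in the talk, such as the learning rate and batch size, are assumptions to verify against the mistral-finetune repo's example configs):

```python
import yaml

# Rough reconstruction of the training config described in the talk.
# Values stated in the talk: rank 64, seq_len 4096, 300 steps, 8 micro-batches,
# W&B project/run names, run_dir under /content. Everything else is a placeholder.
config = {
    "data": {
        "instruct_data": "/content/data/train.jsonl",
        "eval_instruct_data": "/content/data/eval.jsonl",
    },
    "model_id_or_path": "/content/mistral_models/7B-v0.3",
    "lora": {"rank": 64},
    "seq_len": 4096,            # kept low to fit Colab GPU memory
    "batch_size": 1,            # placeholder
    "num_microbatches": 8,
    "max_steps": 300,
    "learning_rate": 6e-5,      # placeholder; decayed over training
    "weight_decay": 0.1,        # placeholder
    "run_dir": "/content/limit_test",
    "save_adapters": True,
    "eval_freq": 100,           # placeholder
    "log_freq": 1,              # placeholder
    "wandb": {"project": "mistral-finetune", "run_name": "dolly-instruct", "offline": False},
}

with open("example.yaml", "w") as f:
    yaml.dump(config, f)

# The repo then provides scripts to validate the data against this YAML and to launch
# training with torchrun pointing at the same file; see its README for the exact
# entry points, which I'm deliberately not reproducing from memory here.
```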
So let's look at W&B. That one, as you can see, is from the completed run, and this is from the run that's currently ongoing. You can see that we have a bunch of different interesting things being tracked. If we look at something like our training loss, we can see it slowly declining, but it's very noisy. Our learning rate is being decayed, as we would expect, and we just finished an eval, of which we'll do many more. So how will this look at the end? Well, here's the completed run with all 300 of our steps: our perplexity goes down, our evaluated training loss goes down, and our eval loss goes down. This is the expectation, of course — as we train, loss goes down. A very classic example. And that's the idea with the W&B integration: it's all just done for us; we don't have to do anything. You love that. So now that we're done training the model, what do we have to do? Well, we've got to use the model, right? We're going to use mistral-inference to do this — Mistral's own take on how to do inference with Mistral models, unsurprisingly. We load our tokenizer from the downloaded model, we load our model from the downloaded model, and remember, the model is the same — we just need those adapters. So then we load our LoRA adapters from our training directory, and then we can send it a request, very similar to how we would with OpenAI, which is very convenient. Then we tokenize our request, generate, and print some results. You can see the result is very straightforward: "Machine learning is a subset of artificial intelligence that allows computers to learn from data without being explicitly programmed." I mean, it's great, right? It does the thing — it follows the instruction, and the instruction was to explain machine learning in a nutshell — so it did great. And that is Mistral FineTune, a library that helps us fine-tune Mistral models.
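Before the wrap-up, here is a rough sketch of that inference step. This is reconstructed from memory of the mistral-inference README rather than taken from the talk, so the import paths, helper names, and especially the checkpoint path are assumptions that may differ between library versions — check them against the current docs.

```python
# Sketch only: API names from memory of mistral-inference / mistral-common; verify
# against the installed versions. The adapter path below is purely illustrative.
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

model_dir = "/content/mistral_models/7B-v0.3"
tokenizer = MistralTokenizer.from_file(f"{model_dir}/tokenizer.model.v3")

model = Transformer.from_folder(model_dir)                 # frozen base weights
model.load_lora("/content/limit_test/checkpoints/last/lora.safetensors")  # hypothetical path

request = ChatCompletionRequest(
    messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")]
)
tokens = tokenizer.encode_chat_completion(request).tokens

out_tokens, _ = generate(
    [tokens], model, max_tokens=256, temperature=0.0,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]))
```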
Don't forget to like, comment, subscribe, and hit the bell — it helps us out. We're here every Wednesday, having a great time talking about cool stuff. So thanks — I'll pass you guys back to Greg. Thanks, Wiz. So that's Mistral FineTune. The last thing we'd like to point out is how these two approaches differ, which will kick off our discussion. Let's remind ourselves that the problem with full fine-tuning is that it's really not workable if you're GPU-poor, and the Hugging Face libraries address that same problem with parameter-efficient fine-tuning methods. The number one PEFT method is LoRA — that's the one you should know, and if you're a beginner, as we mentioned at the start, you should probably still start there. But Mistral FineTune does do the thing. Their CDN, their content delivery network, is rather slow — it took 45 minutes to nearly an hour to download the model. Their opinionated data formatting is going to give you some potential issues if you have complex data. And remember, Mixtral is simply a more compute-intensive thing to deal with, not to mention that you need to do multiple runs because of the way Mixtral models behave if you want to align with the best practices in the repo. And then LoRA just sits at the bottom of everything: you can do it on attention layers, you can do it on multi-layer perceptrons — feed-forward networks — and you can use it at inference: you can plug in the adapter, or plug in an empty adapter and calibrate it during fine-tuning. So make sure you have a handle on the concepts beneath the code. And that is LoRA. To kick off Q&A, I'd like to invite Wiz back up to the stage. One more little discussion point: as we think about Hugging Face versus Mistral FineTune, what jumps out to you as similarities and differences people should keep in mind? Yeah, I mean, they're both used for fine-tuning models — you can fine-tune models with both, and you love to see that. Otherwise, the differences are quite superficial; it's doing the same stuff under the hood. Transformers has had a long time to polish this out, to build things that work exactly the way you expect and have all the bells and whistles we've come to love about that kind of library, and Mistral is just getting started. So I imagine that over time Mistral FineTune will evolve into a solution that makes a lot of sense and is quite useful. For the time being, they're on the path — it's a good first couple of steps in that direction — but the ease of use is just not there yet, in my opinion. Okay. All right. Yeah, it takes a long time to create really clean, user-friendly products, and Mistral is putting out a bunch of stuff these days. I look forward to seeing what they continue to put out as what seems to be a true competitor to OpenAI across the sea. All right, so let's get started with Q&A. We've got a solid ten minutes, everybody, and a QR code at the top of your screen where you can add questions and upvote the ones you like best. I'll kick it off with the first upvoted question today: can we combine these adapters — say, train one for programming, another for medical, and combine them together? Let's just talk about combining adapters, I guess. Yeah — model merging exists and is basically that, so the answer is a simple yes: we can do that. Yeah. And model merging is basically adding them together, right? These injectable low-rank decomposition matrices are perturbations to the weights, and that's what we're adding together when we do model merging. And we do have some model-merging material that we've gone through recently, with the creator and with RC, on our YouTube channel — check that out. Next question: can we have a multi-adapter setup instead of multimodal? How does multi-adapter fit into multimodal? And I think there's a different question baked in here, Rami: having one adapter as a router. Maybe we'll take those one at a time. So, multi-adapter for multimodal: yeah, probably not. It's not as if each adapter will handle a separate modality, though it is the case that we can create a multi-adapter system instead of multiple models. But in terms of getting a vision model or an audio model as an adapter to a language model, it's not going to work — we need to have that image or audio modality as part of the model already.
And then having one adapter as a router — having one model that we build a router for, which uses the right adapter? Yeah, sure, absolutely, that's possible. We might use a simple classification model on the head of it to route to the correct adapter, but that's still a space that's very much being explored. Well, that kind of reminds me of the idea that within the repo we have the function-calling capability, and of course, when we talk about fine-tuning for function calling, we can very easily imagine a router being used in a more agentic way, right? So I think one of the key things I want everybody to take away, which maybe isn't obvious to everybody, is that function calling is just another form of fine-tuning — it just requires, what, more specific formatting, Wiz? That's basically it. That's it, yeah. Okay. All right, so what's the best GPU to buy? Here's a good one for you, Wiz: what's the best GPU for a small-scale industrial application? A 4090 — just get a 4090, it's a great card. A 3090 will also do; the 3090 Ti, I think, is the 24-gig card. You don't need to spend enough for an A6000 — you don't need to. So basically, just accumulate cards that have 24 gigabytes of GPU RAM in whatever flavor is cheapest to you, and go from there — stack them together until you have your little 24-gig card cluster. Okay. Don asks: isn't YAML less likely to handle data format issues well compared to JSON? Well, we're only using the YAML for the configuration file. Everything else is in JSON or JSONL — the data itself is held in JSONL — and we're just using YAML as the config. But yeah, it's just a choice. YAML and config: name a more iconic duo. I can't. Yeah. Okay, can we do this without Weights & Biases? I know the answer to that: yes, you can — it's definitely optional. Any other comments on that? Would you recommend W&B? Yeah, W&B is great. It's free, it works. The real thing to say is that you should just use W&B because it's free and it's great. It's funny, because we were having the same discussion in class last night — why should we use W&B? I think that's a good enough reason. Yeah, it's free and it's great. Okay, another question from Rami: any guide sessions or scripts to prepare and test a standard dataset for Llama, Mistral, or other LLM fine-tuning dataset formats? I think this is a dataset-formatting question, and I'd probably point you to our specific fine-tuning events. We've got a fine-tuning playlist: if we did Llama, you've got to put the data in the Llama format; if we did Mistral, you've got to put it in the Mistral format; and we've done other models like OLMo and a few others as well. I would check out our fine-tuning playlist. Anything else to add there, Wiz? No, I think that's a great place to start. It's a lot of reading and working, but you'll get it quickly. And if we thought a dataset-formatting event would slap, we would do one — this is the first time I've heard that feedback, so if you guys want it, let us know and we'll put it together. How does the choice of optimizer — like Adam or stochastic gradient descent — impact the performance of a fine-tuned LoRA model? Is there a right answer for the optimizer? The right answer is Adam, or a version of Adam.
Just straight up, that's what everyone will use, and does use. There are paged versions, there are fused versions, there are all kinds of fun kernel optimizations that make it very zippy and very quick. So Adam is basically where we're at. Here's an interesting question, since we brought up attention-layer versus MLP-layer fine-tuning: which one's better? Which should I do — fine-tune the attention layers or fine-tune the MLPs? Why not do it all? I mean, you could target either one if you really wanted to, and intuitively attention feels like the place to start, but we'll do all of it, because that's the recommended thing to do — it's the easiest and it's the lowest memory, and we're going to be fine. To be very clear, we're going to be fine-tuning those layers no matter what; it's just whether we're doing full fine-tuning or LoRA adapter fine-tuning that differs. Either way, they're going to get fine-tuned. So there you go — boom, there it is. That's a great place to wrap up. Thanks, Wiz, for showing us Mistral FineTune today. And that brings us to the end of today's event. Don't forget to like and subscribe and ring that bell if you liked this session. And if you're not in our Discord yet, you should definitely join — we've got great community vibes going. I'd really love to see folks who join also consider building and deploying their first-ever LLM application. This is the Beyond ChatGPT event that we put together a while ago now, and it's something we require for everybody who takes our AI Engineering Bootcamp. So if you're up for a challenge, I'd encourage you to see if you can build and deploy your first LLM application and share it in Discord in the Build-Ship-Share channel. There's a ton of awesome activity going on all the time, with folks building their very first applications. Now, if you really want to accelerate your AI engineering learning, you might check out our AI Engineering Bootcamp. We've got a lot of great, cool, fun, interesting announcements coming soon. We just launched cohort three, and cohort four is in August, so you can start planning for it now. If you want to learn with me, with a great group of peers, AI engineers, leaders, and many others, as well as get access to really high-quality opportunities to get in front of hiring partners based on your certification, consider this as a pathway for you in 2024. Next week, we talk loss functions at our Logits and Loss event, all on training and fine-tuning — we're going down deep into the transformer again, so join us for that one. And finally, provide any feedback that you have; we take it seriously and try to improve all the time. As always, in the meantime, we will do our best to keep building, shipping, and sharing, and we hope that you do the same. Thanks, everybody. Have a great rest of your week, and we'll see you all real soon. Bye, guys.", "datetime": "2024-06-09T20:20:27.378549"}
|
train/transcriptions.json
ADDED
The diff for this file is too large to render.
See raw diff
|
|