How to Finetune Llama3-8b in Google Colab for Free

Everybody training new large language models is training them out of the box for chat. A chat-trained LLM is like an intelligent student who finished a general education in 50 languages. Now think of fine-tuning these LLMs like kicking that broke general-ed student out of the house and choosing exactly what specialized degree they will get. In this video, I'm going to demystify the concept of fine-tuning a language model. No programming experience is needed to follow along with this tutorial. I'll be showing how to fine-tune Meta's latest Llama 3 8-billion-parameter model on free GPUs in Google Colab.

Let's discuss a huge problem with the implementation of every team-of-AI-agents project we have seen to date. It started with AutoGPT, and the current leader in hype is now CrewAI. The reason most teams of AI agents that people are creating to attempt to accomplish complex tasks suck is that they are using raw large language models trained to respond as intelligent chatbots, not as action models. Out of the box, these models are trained to have decent general intelligence and serve a general audience, like ChatGPT. As an AI software engineer, the first step I would take to develop a system that outperforms any of these existing AI agent swarms is to train a decision-making model based on first-principles reasoning.

"I think it's also important to reason from first principles rather than by analogy. The normal way that we conduct our lives is we reason by analogy. We're doing this because it's like something else that was done." Hold up, wait a minute. "And what that really means is you boil things down to the most fundamental truths. Okay, what are we sure is true? And then reason up from there. That takes a lot more mental energy."

Think of each response from an LLM as a single thought we want this intelligent AI to generate in ten seconds or less, instead of expecting our AI to use first-principles reasoning on a prompt stuffed with multiple needles in a haystack of facts the model needs to understand, or tasks the AI needs to complete, all in a single response. In this video, I will fine-tune a Llama model to power the first point-of-contact AI agent in a team of AI agents. I want the AI agent to be able to use first-principles reasoning to output a highly logical order of steps that need to be completed to actually provide a factual response, and better yet, automate a complex task.

Before we get into fine-tuning a model, let's first demo raw Llama 3 8B's ability to accomplish just the task of generating first-principles reasoning outputs. If I try to get Llama 3 8B to act as a decision model, the reasoning reads as though it came from a mind that wasn't trained to take actions in the real world. Tell Llama to respond with just a Python list, and it constantly adds notes before or after the list, making the responses unreliable as commands for a Python program or even for other AI agents to process. We can even try these same tasks on Llama 3 70B: the chat-tuned model still cannot manage to output the correct response format reliably. Now let's look at the responses from Llama 3 8B fine-tuned on a tiny but high-quality dataset that I created to show the model exactly how I want it to respond to agent prompts. Fine-tuning on just 40 examples in this case allowed the model to break out of thinking it is just a chatbot limited to generating text.
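As an aside, if you want to reproduce this raw-model demo yourself, here is a minimal sketch using the transformers library. The prompt wording is illustrative, not the exact prompt from the video, and it assumes a recent transformers version whose text-generation pipeline accepts chat messages directly.

```python
from transformers import pipeline

# Chat-tuned Llama 3 8B, the raw model being tested in the demo.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Respond ONLY with a Python list of steps. "
                                   "No notes before or after the list."},
    {"role": "user", "content": "Plan the steps to research a topic online "
                                 "and write a factual summary."},
]

result = generator(messages, max_new_tokens=256)
# The chat-tuned model frequently wraps the list in extra commentary,
# which is exactly the failure mode described above.
print(result[0]["generated_text"][-1]["content"])
```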
Now Llama 3 thinks freely about what tasks would need to be accomplished by an AI to actually carry out my instructions. Despite half of them claiming to in the title, all of the video fine-tuning tutorials I have found fail to show how to fine-tune on your own dataset. In this video, I will show you how to create your own dataset to fine-tune on instead of using some pre-existing dataset. The dataset you use for fine-tuning is about quality, not quantity. Since we are training a specialized model, we want to take full control of maximizing the quality of our dataset's response examples on exactly the task we need it to work at.

As my dataset, I have a JSON file called dataset.json. Inside this file is one long list of dictionaries. Each dictionary consists of the same system prompt I am trying to get the model to properly respond to, as well as the prompt as the input value and the response as the output value (a sketch of this layout follows at the end of this section). To create my dataset, for each response I used Mixtral 8x22B to generate rough-draft responses. Before adding any of these responses to my dataset, I go through and manually edit each one to improve its quality and ensure perfect formatting as a Python list. Your dataset for fine-tuning could be 20 examples or thousands of examples. While larger fine-tuning datasets can improve your model's performance, I can't stress enough the importance of adding only high-quality examples to your dataset. Each example in your dataset shows Llama 3 what you expect a perfect response to that input to look like. So if you are fine-tuning on mid data, expect the quality of your fine-tuned model to be mid. If you want a copy of my dataset to skip making your own for this tutorial, or just to have a copy of my data to add onto for your own fine-tuning, it's available in the pro learning docs channel of my Discord for anyone with an AI Austin Pro membership.

With my dataset of 40 examples complete, I am now ready to start loading them into my Colab notebook and fine-tuning Llama 3. Check the comment section for my pinned comment with the link to the Google Colab notebook that I will be going through in this video. Once the notebook loads, I can drag my dataset.json file into the main content folder. Then I will set my runtime type to use a free T4 GPU and save it to start the runtime. Once it is up and running, click the play button on the first code block in step one to install the needed Python libraries for a T4 GPU. I'll run step two to import the libraries into my runtime. Once the installations and imports complete, we'll run the next one-line block to log into our Hugging Face account with a write-access token. If you don't have a Hugging Face access token yet, you can get one for free by logging into your account, going to Settings, clicking Access Tokens, and creating a token with write access granted. Copy that and paste it into the field to log into your Hugging Face account.

In the next block, we have some Python code that loads our dataset.json file and converts our examples into Llama 3's correct template format (also sketched below). You'll see the hugging_face_user value is set to my username; make sure you change this to your actual Hugging Face username. Our next code block will set up our configuration settings for the fine-tuning. The fine_tuned_model variable sets the name you want to save the model under in your Hugging Face repository, so feel free to change this too.
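To make the dataset layout concrete, here is a minimal sketch of building a dataset.json like the one described above. The key names, system prompt, and example are assumptions for illustration; the actual file may use different keys and wording.

```python
import json

# Placeholder system prompt; the real dataset's wording differs.
SYSTEM_PROMPT = (
    "You are a decision model. Respond only with a Python list of the "
    "steps required to complete the user's task."
)

# One long list of dictionaries, each reusing the same system prompt.
dataset = [
    {
        "system": SYSTEM_PROMPT,
        "input": "Get today's weather for New York and email me a summary.",
        "output": '["Fetch today\'s weather data for New York", '
                  '"Summarize the weather data", '
                  '"Email the summary to the user"]',
    },
    # ...39 more manually edited, high-quality examples
]

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)
```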
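The notebook's conversion code isn't reproduced in this transcript, but the target is the official Llama 3 instruct chat template. A minimal sketch of rendering one example into that format, using the hypothetical keys from the dataset sketch above:

```python
def to_llama3_prompt(example: dict) -> str:
    """Render one dataset example in Llama 3's chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{example['system']}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{example['input']}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{example['output']}<|eot_id|>"
    )
```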
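For orientation, here is a rough sketch of what a configuration block like this typically looks like with Hugging Face's transformers TrainingArguments. Every variable name and value here is an illustrative assumption, not the notebook's actual code; the notebook you open may organize these settings differently.

```python
from transformers import TrainingArguments

# Illustrative placeholders -- change these to your own values.
hugging_face_user = "your-hf-username"
fine_tuned_model = "llama-3-8b-agent-reasoning"  # repo name to save under

training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=2,  # small batches to fit a free T4's memory
    num_train_epochs=1,             # passes through the dataset; raised later in the video
    learning_rate=2e-4,             # a common starting point for QLoRA fine-tunes
    fp16=True,                      # half precision for the T4
    logging_steps=1,                # print training loss at every step
)
```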
We can run the configuration settings block now, and the next block to load the Llama 3 8B QLoRA model and the trainer. Now we'll run the trainer to start the fine-tuning process on our dataset. You'll see the trainer going through multiple training steps before completing; each training step is a batch of our training data being run. With each batch in the training steps, you'll see the training loss number start to drop. When our model is training, it goes through each of our prompts from the dataset and blindly generates what it expects our example response in the dataset to be. Training loss is a value representing the difference between the in-training model's predicted response and the example response in our dataset. A lower training loss value means that the predicted outputs during fine-tuning are getting closer to the responses in our dataset.

This code, with its current configuration settings, runs one epoch. One epoch equals one pass through our entire training dataset. Running more epochs, up to a certain threshold, will absolutely allow your model to achieve a lower training loss during fine-tuning. Going back up to your LoRA configurations, you can change the num_train_epochs variable (seen in the configuration sketch above) to the number of passes through your data you want it to run. Now there are a few things to note before changing this. Increasing this number will increase memory usage, meaning you can only raise it so much on the free Colab GPUs before the runtime fails from exceeding memory. Another consideration is that the benefit of raising the epochs diminishes, meaning at some point running more epochs will not decrease the training loss value. The ideal number for my training dataset was about 15 to 20 epochs, after which the training loss was practically staying the same.

Step 8 will save the trainer_stats.json file to your Colab's content folder. Step 9 will quantize your fine-tuned model and save it to your Hugging Face account. Quantizing the model will allow it to perform much faster on your local machine. This code block will take about 20 minutes to complete. In the last step of the notebook, you can test some of your prompts against your custom model. A better option I can recommend for anyone with a computer with at least 8 gigabytes of RAM, and ideally 16 or more, is to test the model locally with LM Studio. LM Studio is completely free to use and easy to install. Inside LM Studio, I can go to the search tab and type my Hugging Face username. In there, I can click my project's repository, locate the file with Q4_K_M at the end of the filename, and download that model file. Once downloaded, I can go to the chat tab, click New Chat, and load my custom fine-tuned model. In the system prompt tab, I will paste in the exact same system prompt that I used to fine-tune my model. While this is not going to be a tutorial on how to use LM Studio, just note that there are also a lot of settings for optimizing the speed of your model on your machine.

Don't forget to hit the like button on this video if you learned anything new about fine-tuning. This has been AI Austin. I will see you in the next one.