---
title: Llama 2 Fine-tuning
description: Fine-tune Llama2-7B with instruction datasets
icon: "circle-3"
version: EN
---
This example fine-tunes Llama2-7B with a code instruction dataset, illustrating how VESSL AI offloads the infrastructural challenges of large-scale AI workloads and helps you train multi-billion-parameter models in hours, not weeks.
This is the most compute-intensive workload yet, but you will see how VESSL AI's efficient training stack enables you to seamlessly scale and execute multi-node training. For a more in-depth guide, refer to our [blog post](https://blog.vessl.ai/ai-infrastructure-llm).
<CardGroup cols={2}>
<Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/ssd-1b-inference">
Try out this example with a single click on VESSL Hub.
</Card>
<Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/tree/main/llama2-finetuning">
See the completed YAML file and final code for this example.
</Card>
</CardGroup>
## What you will do
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/llama2-title.png"
/>
- Fine-tune an LLM with zero-to-minimum setup
- Mount a custom dataset
- Store and export model artifacts
## Writing the YAML
Let's fill in the `llama2_fine-tuning.yml` file.
<Steps titleSize="h3">
<Step title="Spin up a training job">
Let's spin up an instance. Nothing new here.
```yaml
name: llama2-finetuning
description: Fine-tune Llama2-7B with instruction datasets
resources:
cluster: vessl-gcp-oregon
preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
```
</Step>
<Step title="Mount the code, model, and dataset">
Here, in addition to our GitHub repo and Hugging Face model, we are also mounting a Hugging Face dataset.
As with our HF model, mounting data is as simple as referencing a URL beginning with the `hf://` scheme. The same goes for other cloud storage, such as `s3://` for Amazon S3, as sketched after the YAML below.
```yaml
name: llama2-finetuning
description: Fine-tune Llama2-7B with instruction datasets
resources:
cluster: vessl-gcp-oregon
preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
import:
/model/: hf://huggingface.co/VESSL/llama2
/code/:
git:
url: https://github.com/vessl-ai/hub-model
ref: main
/dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
```
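If your dataset lived in object storage instead, only the URL scheme would change. A minimal sketch, assuming a hypothetical S3 bucket named `my-bucket`:
```yaml
import:
  # Hypothetical bucket and path -- replace with your own S3 location
  /dataset/: s3://my-bucket/code-instructions/
```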
</Step>
<Step title="Write the run commands">
Now that we have the three pillars of model development mounted on our remote workload, we are ready to define the run command. Let's install additional Python dependencies and run `finetuning.py`, which reads the paths to our HF model and dataset from the `config.yaml` file (sketched after the YAML below).
```yaml
name: llama2-finetuning
description: Fine-tune Llama2-7B with instruction datasets
resources:
cluster: vessl-gcp-oregon
preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
import:
/model/: hf://huggingface.co/VESSL/llama2
/code/:
git:
url: https://github.com/vessl-ai/hub-model
ref: main
/dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
run:
- command: |-
pip install -r requirements.txt
python finetuning.py
workdir: /code/llama2-finetuning
```
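The real `config.yaml` lives in the repo next to `finetuning.py`; conceptually, it just points the script at the volumes we mounted above. A hypothetical sketch with made-up keys, for illustration only:
```yaml
# Hypothetical keys for illustration -- see the repo's config.yaml for the real ones
model_path: /model
dataset_path: /dataset
```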
</Step>
<Step title="Export a model artifact">
You can keep track of model checkpoints by dedicating an `export` volume to the workload. After training is finished, trained models are uploaded to the `/artifacts/` folder as model checkpoints.
```yaml
name: llama2-finetuning
description: Fine-tune Llama2-7B with instruction datasets
resources:
cluster: vessl-gcp-oregon
preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
import:
/model/: hf://huggingface.co/VESSL/llama2
/code/:
git:
url: https://github.com/vessl-ai/hub-model
ref: main
/dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca
run:
- command: |-
pip install -r requirements.txt
python finetuning.py
workdir: /code/llama2-finetuning
export:
/artifacts/: vessl-artifact://
```
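The export volume captures whatever your training script writes under `/artifacts/`. If you adapt this example, one way to make sure checkpoints land on the exported path is to point Hugging Face's `TrainingArguments(output_dir="/artifacts")` at it.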
</Step>
</Steps>
## Running the workload
Create the Run with the CLI command below. Once the workload is completed, you can follow the link printed in the terminal to find the output files, including the model checkpoints, under Files.
```
vessl run create -f llama2_fine-tuning.yml
```
<img
style={{ borderRadius: '0.5rem' }}
src="/images/get-started/llama2-artifacts.jpeg"
/>
## Behind the scenes
With VESSL AI, you can launch a full-scale LLM fine-tuning workload on any cloud, at any scale, without worrying about the underlying system backends:
* **Model checkpointing** — VESSL AI stores `.pt` files to mounted volumes or its model registry and ensures seamless checkpointing of fine-tuning progress.
* **GPU failovers** — VESSL AI can autonomously detect GPU failures, recover failed containers, and automatically re-assign workloads to other GPUs.
* **Spot instances** — Spot instances on VESSL AI work with model checkpointing and export volumes, safely saving and resuming the progress of interrupted workloads.
* **Distributed training** — VESSL AI comes with native support for PyTorch `DistributedDataParallel` and simplifies the process of setting up multi-cluster, multi-node distributed training.
* **Autoscaling** — As GPUs are released from other tasks, you can dedicate more of them to fine-tuning workloads with a small change to your existing fine-tuning YAML.
## Tips & tricks
In addition to the model checkpoints, you can track key metrics and parameters with the `vessl.log` Python SDK. Here's a snippet from [finetuning.py](https://github.com/vessl-ai/hub-model/blob/a74e87564d0775482fe6c56ff811bd8a9821f809/llama2-finetuning/finetuning.py#L97-L109).
```python
import vessl
from transformers import TrainerCallback

class VesslLogCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is None:
            return
        # Evaluation steps report eval_loss; training steps report loss.
        if "eval_loss" in logs:
            payload = {
                "eval_loss": logs["eval_loss"],
            }
            vessl.log(step=state.global_step, payload=payload)
        elif "loss" in logs:
            payload = {
                "train_loss": logs["loss"],
                "learning_rate": logs["learning_rate"],
            }
            vessl.log(step=state.global_step, payload=payload)
```
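To activate the callback, register it when constructing the Hugging Face `Trainer`, for example `Trainer(..., callbacks=[VesslLogCallback()])`. Metrics logged this way show up as plots on the Run's dashboard.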
## Using our web interface
You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a new Run.
<iframe
src="https://scribehow.com/embed/Llama_2_Fine-tuning_with_VESSL_AI__3UJWTqUgTguq1vYrjNu9MA?skipIntro=true&removeLogo=true"
width="100%" height="640" allowfullscreen frameborder="0"
style={{ borderRadius: '0.5rem' }} >
</iframe>
## What's next?
We shared how you can use VESSL AI to go from a simple Python container to a full-scale AI workload. We hope these guides give you a glimpse of what you can achieve with VESSL AI. For more resources, follow along with our example models and use cases.
<CardGroup cols={2}>
<Card icon="wand" title="Explore more models" href="https://vessl.ai/hub">
See VESSL AI in action with the latest open-source models and our example Runs.
</Card>
<Card icon="rectangles-mixed" title="Explore more use cases" href="use-cases/">
See the top use cases of VESSL AI from experiment tracking to cluster management.
</Card>
</CardGroup>