|
--- |
|
title: Llama 2 Fine-tuning |
|
description: Fine-tune Llama2-7B with instruction datasets |
|
icon: "circle-3" |
|
version: EN |
|
--- |
|
|
|
This example fine-tunes Llama2-7B with a code instruction dataset, illustrating how VESSL AI offloads the infrastructural challenges of large-scale AI workloads and helps you train multi-billion-parameter models in hours, not weeks.
|
|
|
This is the most compute-intensive workload yet, but you will see how VESSL AI's efficient training stack enables you to seamlessly scale out and execute multi-node training. For a more in-depth guide, refer to our [blog post](https://blog.vessl.ai/ai-infrastructure-llm).
|
|
|
<CardGroup cols={2}> |
|
<Card icon="sparkles" title="Try it on VESSL Hub" href="https://vessl.ai/hub/ssd-1b-inference"> |
|
Try out this example with a single click on VESSL Hub.
|
</Card> |
|
|
|
<Card icon="github" title="See the final code" href="https://github.com/vessl-ai/hub-model/tree/main/SSD-1B"> |
|
See the completed YAML file and final code for this example. |
|
</Card> |
|
</CardGroup> |
|
|
|
## What you will do |
|
|
|
<img |
|
style={{ borderRadius: '0.5rem' }} |
|
src="/images/get-started/llama2-title.png" |
|
/> |
|
|
|
- Fine-tune an LLM with zero-to-minimum setup |
|
- Mount a custom dataset |
|
- Store and export model artifacts |
|
|
|
## Writing the YAML |
|
|
|
Let's fill in the `llama2_fine-tuning.yml` file. |
|
|
|
<Steps titleSize="h3"> |
|
<Step title="Spin up a training job"> |
|
Let's spin up an instance. Nothing new here.
|
|
|
```yaml |
|
name: llama2-finetuning
|
description: Fine-tune Llama2-7B with instruction datasets |
|
resources: |
|
cluster: vessl-gcp-oregon |
|
preset: gpu-l4-small |
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
``` |
|
</Step> |
|
|
|
<Step title="Mount the code, modal, and dataset"> |
|
Here, in addition to our GitHub repo and Hugging Face model, we are also mounting a Hugging Face dataset. |
|
|
|
As with our HF model, mounting data is as simple as referencing a URL beginning with the `hf://` scheme -- the same goes for other cloud storage, such as `s3://` for Amazon S3, as shown in the short example after the YAML below.
|
|
|
```yaml |
|
name: llama2-finetuning |
|
description: Fine-tune Llama2-7B with instruction datasets
|
resources: |
|
cluster: vessl-gcp-oregon |
|
preset: gpu-l4-small |
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
import: |
|
/model/: hf://huggingface.co/VESSL/llama2 |
|
/code/: |
|
git: |
|
url: https://github.com/vessl-ai/hub-model |
|
ref: main |
|
/dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca |
|
``` |
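
For instance, pulling the same dataset from Amazon S3 instead would only change the `import` entry. The bucket name and path here are hypothetical:

```yaml
import:
  /dataset/: s3://my-bucket/code-instructions/
```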
|
</Step> |
|
|
|
<Step title="Write the run commands"> |
|
Now that we have the three pillars of model development mounted on our remote workload, we are ready to define the run command. Let's install additional Python dependencies and run `finetuning.py`, which picks up our HF model and dataset from the `config.yaml` file.
|
|
|
```yaml |
|
name: llama2-finetuning |
|
description: Fine-tune Llama2-7B with instruction datasets
|
resources: |
|
cluster: vessl-gcp-oregon |
|
preset: gpu-l4-small |
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
import: |
|
/model/: hf://huggingface.co/VESSL/llama2 |
|
/code/: |
|
git: |
|
url: https://github.com/vessl-ai/hub-model |
|
ref: main |
|
/dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca |
|
run: |
|
- command: |- |
|
pip install -r requirements.txt |
|
python finetuning.py |
|
workdir: /code/llama2-finetuning |
|
``` |
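
For reference, `config.yaml` is where the script picks up the mounted paths. The exact contents live in the repo; a hypothetical minimal version might look like this (field names are illustrative, not the repo's actual keys):

```yaml
# Illustrative only -- see config.yaml in the repo for the real fields
model_name_or_path: /model
dataset_path: /dataset
output_dir: /artifacts
```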
|
</Step> |
|
|
|
<Step title="Export a model artifact"> |
|
You can keep track of model checkpoints by dedicating an `export` volume to the workload. After training finishes, trained models are uploaded to the `/artifacts/` folder as model checkpoints.
|
|
|
```yaml |
|
name: llama2-finetuning |
|
description: Fine-tune Llama2-7B with instruction datasets
|
resources: |
|
cluster: vessl-gcp-oregon |
|
preset: gpu-l4-small |
|
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3 |
|
import: |
|
/model/: hf://huggingface.co/VESSL/llama2 |
|
/code/: |
|
git: |
|
url: https://github.com/vessl-ai/hub-model |
|
ref: main |
|
/dataset/: hf://huggingface.co/datasets/VESSL/code_instructions_small_alpaca |
|
run: |
|
- command: |- |
|
pip install -r requirements.txt |
|
python finetuning.py |
|
workdir: /code/llama2-finetuning |
|
export: |
|
/artifacts/: vessl-artifact:// |
|
``` |
|
</Step> |
|
</Steps> |
|
|
|
## Running the workload |
|
|
|
Run the workload with the following command. Once it completes, you can follow the link in the terminal to find the output files, including the model checkpoints, under Files.
|
|
|
```sh
|
vessl run create -f llama2_fine-tuning.yml |
|
``` |
|
|
|
<img |
|
style={{ borderRadius: '0.5rem' }} |
|
src="/images/get-started/llama2-artifacts.jpeg" |
|
/> |
|
|
|
## Behind the scenes |
|
|
|
With VESSL AI, you can launch a full-scale LLM fine-tuning workload on any cloud, at any scale, without worrying about the underlying system backends listed below.
|
|
|
* **Model checkpointing**: VESSL AI stores `.pt` files to mounted volumes or a model registry and ensures seamless checkpointing of fine-tuning progress.

* **GPU failovers**: VESSL AI autonomously detects GPU failures, recovers failed containers, and automatically re-assigns workloads to healthy GPUs.

* **Spot instances**: Spot instances on VESSL AI work with model checkpointing and export volumes, saving and resuming the progress of interrupted workloads safely.

* **Distributed training**: VESSL AI comes with native support for PyTorch `DistributedDataParallel` and simplifies the setup of multi-cluster, multi-node distributed training; see the sketch after this list.

* **Autoscaling**: As GPUs are released from other tasks, you can dedicate them to your fine-tuning workload with a few additional lines in your existing fine-tuning YAML.
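
For reference, wrapping a model with PyTorch's `DistributedDataParallel` typically looks like the sketch below. This is generic PyTorch rather than VESSL-specific code, and it assumes the workload is launched with `torchrun`, which sets the `RANK`, `LOCAL_RANK`, and `WORLD_SIZE` environment variables:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_ddp(model: torch.nn.Module) -> DDP:
    # Join the default process group; NCCL is the standard backend for GPUs
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    # Move the model to this rank's GPU; DDP synchronizes gradients across ranks
    return DDP(model.to(local_rank), device_ids=[local_rank])
```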
|
|
|
## Tips & tricks |
|
|
|
In addition to the model checkpoints, you can track key metrics and parameters with the `vessl.log` Python SDK. Here's a snippet from [finetuning.py](https://github.com/vessl-ai/hub-model/blob/a74e87564d0775482fe6c56ff811bd8a9821f809/llama2-finetuning/finetuning.py#L97-L109).
|
|
|
```python |
|
import vessl
from transformers import TrainerCallback


class VesslLogCallback(TrainerCallback):
    # Forward Hugging Face Trainer logs to VESSL at each logging step
|
def on_log(self, args, state, control, logs=None, **kwargs): |
|
if "eval_loss" in logs.keys(): |
|
payload = { |
|
"eval_loss": logs["eval_loss"], |
|
} |
|
vessl.log(step=state.global_step, payload=payload) |
|
elif "loss" in logs.keys(): |
|
payload = { |
|
"train_loss": logs["loss"], |
|
"learning_rate": logs["learning_rate"], |
|
} |
|
vessl.log(step=state.global_step, payload=payload) |
|
``` |
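
To wire the callback up, pass it to the Hugging Face `Trainer`. A minimal sketch, assuming `model`, `args`, and the tokenized datasets are defined elsewhere in the script:

```python
from transformers import Trainer

# Register the callback so the Trainer invokes vessl.log on every logging event
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[VesslLogCallback()],
)
trainer.train()
```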
|
|
|
## Using our web interface |
|
|
|
You can repeat the same process on the web. Head over to your [Organization](https://vessl.ai), select a project, and create a New run. |
|
|
|
<iframe |
|
src="https://scribehow.com/embed/Llama_2_Fine-tuning_with_VESSL_AI__3UJWTqUgTguq1vYrjNu9MA?skipIntro=true&removeLogo=true" |
|
width="100%" height="640" allowfullscreen frameborder="0" |
|
style={{ borderRadius: '0.5rem' }} > |
|
</iframe> |
|
|
|
## What's next? |
|
|
|
We've shown how you can use VESSL AI to go from a simple Python container to a full-scale AI workload. We hope these guides give you a glimpse of what you can achieve with VESSL AI. For more resources, follow along with our example models and use cases.
|
|
|
<CardGroup cols={2}> |
|
<Card icon="wand" title="Explore more models" href="https://vessl.ai/hub"> |
|
See VESSL AI in action with the latest open-source models and our example Runs. |
|
</Card> |
|
|
|
<Card icon="rectangles-mixed" title="Explore more use casees" href="use-cases/"> |
|
See the top use cases of VESSL AI, from experiment tracking to cluster management.
|
</Card> |
|
</CardGroup> |
|
|