{ "cells": [ { "cell_type": "markdown", "id": "b29a4b72-31bb-4268-9598-2cd2b6f7475e", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "# NeVA Training / Inference Tutorial\n", "\n", "### Note:\n", "Currently, this notebook must be run in a NeMo container. An example command to launch the container:\n", "\n", "```\n", "docker run --gpus all -it --rm -v :/opt/NeMo --shm-size=8g \\\n", " -p 8888:8888 --ulimit memlock=-1 --ulimit \\\n", " stack=67108864 \n", "```\n", "\n", "## Introduction\n", "\n", "This notebook illustrates how to train and perform inference using NeVA with the NeMo Toolkit. NeVA originates from [LLaVA](https://github.com/haotian-liu/LLaVA) (Large Language and Vision Assistant) and is a powerful multimodal image-text instruction tuned model optimized within the NeMo Framework. \n", "\n", "This tutorial will guide you through the following topics:\n", "1. Prepare pre-requisites for NeVA training\n", "2. Training a NeVA model\n", "3. Performing inference with the trained model\n", "\n", "## Datasets\n", "\n", "### Pre-Training Dataset\n", "\n", "The pre-training dataset is open-sourced from the LLaVA implementation and can be downloaded [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain). The dataset consists of a 558K subset of the LAION-CC-SBU dataset with BLIP captions.\n", "\n", "The associated images for pretraining can be downloaded via HuggingFace [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/blob/main/images.zip).\n", "\n", "### Instruction Tuning Dataset\n", "\n", "The instruction tuning annotations are sourced from the LLaVA implementation and are available [here](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json).\n", "\n", "The associated images for the mixture instruction tuning annotations can be found [here](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#visual-instruction-tuning). After extracting, the data should be formatted as follows:\n", "\n", "```\n", " images\n", " ├── coco\n", " │ └── train2017\n", " ├── gqa\n", " │ └── images\n", " ├── ocr_vqa\n", " │ └── images\n", " ├── textvqa\n", " │ └── train_images\n", " └── vg\n", " ├── VG_100K\n", " └── VG_100K_2\n", "```\n", "\n", "After downloading all below datasets for pretraining and instruction tuning, please put data folder at `/workspace/datasets`. Your dataset directory should look something similar to:\n", "\n", "```\n", "LLaVA-Pretrain-LCS-558K\n", "├── blip_laion_cc_sbu_558k.json\n", "├── images\n", "LLaVA-Instruct-mixture\n", "├── llava_v1_5_mix665k.json\n", "└── images\n", " └── ...\n", "```\n", "\n", "## Setting up Checkpoint and Tokenizer\n", "\n", "In this notebook, we first need to convert the Vicuna 1.5 checkpoint into the .nemo format. Meanwhile, special tokens must be incorporated into the tokenizer for NeVA training. After downloading language models from Hugging Face, ensure you also fetch the corresponding tokenizer model. Using the 7B-chat model as a reference." ] }, { "cell_type": "code", "execution_count": null, "id": "6d80adff-bd3a-40e0-9441-684328ec7596", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "! mkdir -p /workspace/checkpoints\n", "\n", "# Download vicuna checkpoint from HF\n", "! git clone https://huggingface.co/lmsys/vicuna-7b-v1.5 /workspace/checkpoints/vicuna-7b-v1.5\n", "\n", "# Convert checkpoint\n", "! 
{ "cell_type": "markdown", "id": "0a1b2c3d", "metadata": {}, "source": [ "## Setting up Checkpoint and Tokenizer\n", "\n", "In this notebook, we first need to convert the Vicuna 1.5 checkpoint into the `.nemo` format. In addition, special tokens must be added to the tokenizer for NeVA training. After downloading the language model from Hugging Face, make sure you also fetch the corresponding tokenizer model. We use the 7B chat model (Vicuna 7B v1.5) as a reference below." ] }, { "cell_type": "code", "execution_count": null, "id": "6d80adff-bd3a-40e0-9441-684328ec7596", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "! mkdir -p /workspace/checkpoints\n", "\n", "# Download the Vicuna checkpoint from Hugging Face\n", "! git clone https://huggingface.co/lmsys/vicuna-7b-v1.5 /workspace/checkpoints/vicuna-7b-v1.5\n", "\n", "# Convert the checkpoint to the .nemo format\n", "! python /opt/NeMo/scripts/checkpoint_converters/convert_llama_hf_to_nemo.py \\\n", "  --input_name_or_path /workspace/checkpoints/vicuna-7b-v1.5 \\\n", "  --output_path /workspace/checkpoints/vicuna-7b-v1.5.nemo\n", "\n", "# Build sentencepiece and generate the protobuf bindings needed by the tokenizer script\n", "! cd /opt && git clone https://github.com/google/sentencepiece.git && \\\n", "  cd sentencepiece && \\\n", "  mkdir build && \\\n", "  cd build && \\\n", "  cmake .. && \\\n", "  make && \\\n", "  make install && \\\n", "  ldconfig && \\\n", "  cd /opt/sentencepiece/src/ && protoc --python_out=/opt/NeMo/scripts/tokenizers/ sentencepiece_model.proto && \\\n", "  export PYTHONPATH=$PYTHONPATH:/opt/NeMo/scripts/tokenizers\n", "\n", "# Add the special tokens required by NeVA to the tokenizer\n", "! python /opt/NeMo/scripts/tokenizers/add_special_tokens_to_sentencepiece.py \\\n", "  --input_file /workspace/checkpoints/vicuna-7b-v1.5/tokenizer.model \\\n", "  --output_file /workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model \\\n", "  --is_userdefined \\\n", "  --tokens \"<extra_id_0>\" \"<extra_id_1>\" \"<extra_id_2>\" \"<extra_id_3>\" \\\n", "           \"<extra_id_4>\" \"<extra_id_5>\" \"<extra_id_6>\" \"<extra_id_7>\"\n" ] }, { "cell_type": "markdown", "id": "6b619e0a", "metadata": {}, "source": [ "## Training\n", "\n", "### Feature Alignment Pre-Training\n", "\n", "We provide a set of scripts for pre-training and fine-tuning that can be launched with CLI flags specifying the desired arguments.\n", "\n", "An example pre-training script execution (note that the script only runs 100 steps with a small micro batch size; this is not a full training run):" ] }, { "cell_type": "code", "execution_count": null, "id": "3930351e", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "! torchrun --nproc_per_node=4 /opt/NeMo/examples/multimodal/multimodal_llm/neva/neva_pretrain.py \\\n", " ++cluster_type=BCP \\\n", " trainer.precision=bf16 \\\n", " trainer.num_nodes=1 \\\n", " trainer.devices=4 \\\n", " trainer.val_check_interval=50 \\\n", " trainer.limit_val_batches=5 \\\n", " trainer.log_every_n_steps=1 \\\n", " trainer.max_steps=100 \\\n", " model.megatron_amp_O2=True \\\n", " model.micro_batch_size=1 \\\n", " model.global_batch_size=4 \\\n", " model.tensor_model_parallel_size=1 \\\n", " model.pipeline_model_parallel_size=1 \\\n", " model.mcore_gpt=True \\\n", " model.transformer_engine=True \\\n", " model.data.data_path=/workspace/datasets/LLaVA-Pretrain-LCS-558K/blip_laion_cc_sbu_558k.json \\\n", " model.data.image_folder=/workspace/datasets/LLaVA-Pretrain-LCS-558K/images \\\n", " model.tokenizer.library=sentencepiece \\\n", " model.tokenizer.model=/workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model \\\n", " model.encoder_seq_length=4096 \\\n", " model.num_layers=32 \\\n", " model.hidden_size=4096 \\\n", " model.ffn_hidden_size=11008 \\\n", " model.num_attention_heads=32 \\\n", " model.normalization=rmsnorm \\\n", " model.do_layer_norm_weight_decay=False \\\n", " model.apply_query_key_layer_scaling=True \\\n", " model.bias=False \\\n", " model.activation=fast-swiglu \\\n", " model.headscale=False \\\n", " model.position_embedding_type=rope \\\n", " model.rotary_percentage=1.0 \\\n", " model.num_query_groups=null \\\n", " model.data.num_workers=0 \\\n", " model.mm_cfg.llm.from_pretrained=/workspace/checkpoints/vicuna-7b-v1.5.nemo \\\n", " model.mm_cfg.llm.model_type=v1 \\\n", " model.data.conv_template=v1 \\\n", " model.mm_cfg.vision_encoder.from_pretrained='openai/clip-vit-large-patch14' \\\n", " model.mm_cfg.vision_encoder.from_hf=True \\\n", " model.optim.name=\"fused_adam\" \\\n", " exp_manager.create_checkpoint_callback=True \\\n", " exp_manager.checkpoint_callback_params.save_nemo_on_train_end=True \\\n", " exp_manager.create_wandb_logger=False" ] }, 
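{ "cell_type": "markdown", "id": "2b3c4d5e", "metadata": {}, "source": [ "Because `exp_manager.create_checkpoint_callback=True` and `save_nemo_on_train_end=True` are set, the pre-training run should leave a `.nemo` checkpoint under the experiment directory. The path below is an assumption based on the experiment defaults used here (it matches the `model.restore_from_path` used in the fine-tuning step); you can verify that the checkpoint exists before moving on:" ] }, { "cell_type": "code", "execution_count": null, "id": "6f7a8b9c", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# List the checkpoints produced by the pre-training run (path assumed from the experiment defaults)\n", "! ls -lh /workspace/nemo_experiments/nemo_neva/checkpoints/" ] }, 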
{ "cell_type": "markdown", "id": "f24ee70d-3025-47f6-8571-295b024c3e05", "metadata": {}, "source": [ "**Note**: To train a model from scratch rather than from a pretrained checkpoint, specify `null` instead of a path in the CLI arguments.\n", "\n", "### Image-Language Pair Instruction Fine-Tuning\n", "\n", "Fine-tuning can also be run from within the container with a similar command, using the `neva_finetune.py` script. We start from the checkpoint saved during the pre-training step, specified via `model.restore_from_path=/workspace/nemo_experiments/nemo_neva/checkpoints/nemo_neva.nemo`.\n", "\n", "An example image-text pair instruction tuning script execution (note that the script only runs 100 steps with a small micro batch size; this is not a full training run):" ] }, { "cell_type": "code", "execution_count": null, "id": "97963224", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "! torchrun --nproc_per_node=4 /opt/NeMo/examples/multimodal/multimodal_llm/neva/neva_finetune.py \\\n", " ++cluster_type=BCP \\\n", " trainer.precision=bf16 \\\n", " trainer.num_nodes=1 \\\n", " trainer.devices=4 \\\n", " trainer.val_check_interval=50 \\\n", " trainer.limit_val_batches=50 \\\n", " trainer.max_steps=100 \\\n", " model.restore_from_path=/workspace/nemo_experiments/nemo_neva/checkpoints/nemo_neva.nemo \\\n", " model.megatron_amp_O2=True \\\n", " model.micro_batch_size=1 \\\n", " model.global_batch_size=2 \\\n", " model.tensor_model_parallel_size=4 \\\n", " model.pipeline_model_parallel_size=1 \\\n", " model.mcore_gpt=True \\\n", " model.transformer_engine=True \\\n", " model.data.data_path=/workspace/datasets/LLaVA-Instruct-mixture/llava_v1_5_mix665k.json \\\n", " model.data.image_folder=/workspace/datasets/LLaVA-Instruct-mixture/images \\\n", " model.tokenizer.library=sentencepiece \\\n", " model.tokenizer.model=/workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model \\\n", " model.encoder_seq_length=4096 \\\n", " model.num_layers=32 \\\n", " model.hidden_size=4096 \\\n", " model.ffn_hidden_size=11008 \\\n", " model.num_attention_heads=32 \\\n", " model.normalization=rmsnorm \\\n", " model.do_layer_norm_weight_decay=False \\\n", " model.apply_query_key_layer_scaling=True \\\n", " model.bias=False \\\n", " model.activation=fast-swiglu \\\n", " model.headscale=False \\\n", " model.position_embedding_type=rope \\\n", " model.rotary_percentage=1.0 \\\n", " model.num_query_groups=null \\\n", " model.data.num_workers=0 \\\n", " model.mm_cfg.llm.from_pretrained=/workspace/checkpoints/vicuna-7b-v1.5.nemo \\\n", " model.mm_cfg.llm.model_type=v1 \\\n", " model.data.conv_template=v1 \\\n", " model.mm_cfg.vision_encoder.from_pretrained='openai/clip-vit-large-patch14' \\\n", " model.mm_cfg.vision_encoder.from_hf=True \\\n", " exp_manager.create_checkpoint_callback=True \\\n", " exp_manager.checkpoint_callback_params.save_nemo_on_train_end=True \\\n", " exp_manager.name=\"nemo_neva_finetune\" \\\n", " model.optim.name=\"fused_adam\"" ] }, 
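{ "cell_type": "markdown", "id": "4d5e6f7a", "metadata": {}, "source": [ "If you would like to run the inference section below against your own fine-tuned model instead of a converted Hugging Face checkpoint, locate the `.nemo` file saved by the fine-tuning run and point `neva_model_file` at it. The search below simply lists all `.nemo` checkpoints under the experiment directory; the exact location is an assumption based on `exp_manager.name=\"nemo_neva_finetune\"` and the default experiment layout:" ] }, { "cell_type": "code", "execution_count": null, "id": "8b9c0d1e", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# Locate the .nemo checkpoints saved by the training runs (experiment directory assumed from the defaults)\n", "! find /workspace/nemo_experiments -name '*.nemo'" ] }, 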
{ "cell_type": "markdown", "id": "d69e937c", "metadata": {}, "source": [ "## Inference\n", "\n", "### From Pre-trained Checkpoints\n", "\n", "To run NeVA inference from a pre-trained checkpoint, you can either use the `.nemo` checkpoint produced by the fine-tuning step or first convert a Hugging Face checkpoint to the `.nemo` format. Since we did not complete a full training run with NeMo in this tutorial, we show how to convert a checkpoint from Hugging Face." ] }, { "cell_type": "code", "execution_count": null, "id": "5f398c26", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "! python3 /opt/NeMo/scripts/checkpoint_converters/convert_llava_hf_to_nemo.py \\\n", "  --input_name_or_path llava-hf/llava-1.5-7b-hf \\\n", "  --output_path /workspace/checkpoints/llava-7b.nemo \\\n", "  --tokenizer_path /workspace/checkpoints/vicuna-7b-v1.5/tokenizer_neva.model" ] }, { "cell_type": "markdown", "id": "5235639a", "metadata": {}, "source": [ "### Running Inference\n", "\n", "NeVA inference in the NeMo Framework can be spun up quickly via the NeMo Launcher, with a few modifications to use the default NeVA inference config file.\n", "\n", "Within the container, inference can also be run with a similar command, using the provided inference script `neva_evaluation.py`.\n", "\n", "An example of an inference script execution:" ] }, { "cell_type": "code", "execution_count": null, "id": "ee0156ea", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "# Create a one-line prompt file; the <image> token marks where the image is inserted into the prompt\n", "! echo '{\"image\": \"RTX4080.png\", \"prompt\": \"<image>\\nCan you describe this image?\"}' > sample.jsonl\n", "\n", "# Download a sample image\n", "! mkdir -p images && wget https://assets.nvidia.partners/images/png/TUF_Gaming_GeForce_RTX_4080_SUPER_OC_edition_packaging_with_card__12419.png --output-document=images/RTX4080.png\n", "\n", "# Run inference\n", "! torchrun --nproc_per_node=1 /opt/NeMo/examples/multimodal/multimodal_llm/neva/neva_evaluation.py \\\n", "tensor_model_parallel_size=1 \\\n", "pipeline_model_parallel_size=1 \\\n", "neva_model_file=/workspace/checkpoints/llava-7b.nemo \\\n", "trainer.devices=1 \\\n", "trainer.precision=bf16 \\\n", "prompt_file=sample.jsonl \\\n", "inference.media_base_path=images \\\n", "output_file=output.jsonl \\\n", "inference.temperature=0.2 \\\n", "inference.tokens_to_generate=256" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6" } }, "nbformat": 4, "nbformat_minor": 5 }