LLaVA Model Card

Model details

This is a fork of the original liuhaotian/llava-v1.5-13b. This repo adds code/inference.py and code/requirements.txt to provide a customized inference script and environment for SageMaker deployment.

Model type: LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture.

Model date: LLaVA-v1.5-13B was trained in September 2023.

Paper or resources for more information: https://llava-vl.github.io/

How to Deploy on SageMaker

Following deploy_llava.ipynb (full tutorial here), bundle the LLaVA model weights and the custom code into a model.tar.gz and upload it to S3.
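
The following is a minimal packaging sketch, assuming the repository snapshot (including code/inference.py and code/requirements.txt) has been downloaded to ./llava-v1.5-13b; SageMaker expects the model files and the code/ folder at the root of the archive:

import os
import tarfile

# pack the model weights and the code/ folder at the archive root
# (assumed local layout: ./llava-v1.5-13b contains the repo snapshot)
with tarfile.open("model.tar.gz", "w:gz") as tar:
    for name in os.listdir("llava-v1.5-13b"):
        tar.add(os.path.join("llava-v1.5-13b", name), arcname=name)

With model.tar.gz in place, upload it to the session's default bucket: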

import sagemaker
from sagemaker.s3 import S3Uploader

sess = sagemaker.Session()  # current SageMaker session, provides the default bucket

# upload model.tar.gz to S3
s3_model_uri = S3Uploader.upload(
    local_path="./model.tar.gz",
    desired_s3_uri=f"s3://{sess.default_bucket()}/llava-v1.5-13b",
)

print(f"model uploaded to: {s3_model_uri}")

Then use HuggingFaceModel to deploy a real-time inference endpoint on SageMaker:

import sagemaker
from sagemaker.huggingface.model import HuggingFaceModel

role = sagemaker.get_execution_role()  # execution role of the current SageMaker environment

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data=s3_model_uri,        # path to your model and script
    role=role,                      # IAM role with permissions to create an endpoint
    transformers_version="4.28.1",  # transformers version used
    pytorch_version="2.0.0",        # pytorch version used
    py_version="py310",             # python version used
    model_server_workers=1,
)

# deploy the model to a real-time endpoint
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
)

Inference on SageMaker

The default conv_mode for LLaVA-1.5 is set to llava_v1, which formats raw_prompt into a full prompt using the model's conversation template. You can also set conv_mode to raw to use raw_prompt directly.

data = {
    "image": "https://raw.githubusercontent.com/haotian-liu/LLaVA/main/images/llava_logo.png",
    "question": "Describe the image and color details.",
    # optional parameters:
    # "max_new_tokens": 1024,
    # "temperature": 0.2,
    # "conv_mode": "llava_v1",
}
output = predictor.predict(data)
print(output)

Alternatively, use the SageMaker Runtime client to invoke the endpoint directly.
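
A minimal invocation sketch with boto3 is shown below; the endpoint name is a placeholder (use predictor.endpoint_name from the deploy step), and data is the same payload dictionary as above:

import json
import boto3

# low-level SageMaker Runtime client
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="llava-v1-5-13b-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(data),
)
output = json.loads(response["Body"].read().decode("utf-8"))
print(output)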

License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

Where to send questions or comments about the model: https://github.com/haotian-liu/LLaVA/issues

Intended use

Primary intended uses: The primary use of LLaVA is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset

  • 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
  • 158K GPT-generated multimodal instruction-following data.
  • 450K academic-task-oriented VQA data mixture.
  • 40K ShareGPT data.

Evaluation dataset

A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
