Error when deploying llava-hf/llava-1.5-7b-hf using Amazon SageMaker

#25 · opened by dariahhlibova

Hello,

I used the code snippet provided in the model's example to deploy it through AWS SageMaker, but when I run inference (output = predictor.predict(data)) I get this error:

```
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
  "code": 400,
  "type": "InternalServerException",
  "message": "\u0027llava\u0027"
}
```
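
For reference, the failing call is shaped roughly like this. This is a minimal sketch: the actual `data` payload isn't shown above, so the image URL and the payload keys below are assumptions based on the usual image-to-text request format.

```python
# minimal sketch of the failing inference call; the URL is a placeholder and
# the {"inputs": ...} shape is assumed from the image-to-text task convention
data = {
    "inputs": "https://example.com/image.jpg"
}
output = predictor.predict(data)  # raises the 400 ModelError shown above
```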

The code snippet for model deployment:

```python
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'llava-hf/llava-1.5-7b-hf',
    'HF_TASK': 'image-to-text'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.37.0',
    pytorch_version='2.1.0',
    py_version='py310',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,   # number of instances
    instance_type='ml.g5.xlarge'  # ec2 instance type
)
```
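
After testing, I tear the endpoint down with the standard SageMaker SDK cleanup calls (included here for completeness):

```python
# remove the model and endpoint so the instance stops incurring charges
predictor.delete_model()
predictor.delete_endpoint()
```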
