Calling predictor.predict() on deployed model via SageMaker

#14
by Callam - opened

I am trying to call .predict() on a deployed model. My setup:

hub = {
    'HF_MODEL_ID': "Salesforce/blip-image-captioning-base",  # model_id from hf.co/models
    'HF_TASK': "image-to-text"                               # task you want to use for predictions
}

Create the Hugging Face Model class:

huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # IAM role with permissions to create an endpoint
    transformers_version="4.26",  # Transformers version used
    pytorch_version="1.13",       # PyTorch version used
    py_version="py39",            # Python version used
)

Deploy the model to SageMaker Inference:

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)

data = {
    "inputs": {
        "img_url": 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
    }
}

Then send the request:

predictor.predict(data)

---------- ERROR Message ----------
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from the primary with the message "{
"code": 400,
"type": "InternalServerException",
"message": "Incorrect format used for image. Should be an url linking to an image, a local path, or a PIL image."
}

Does anyone know why this fails, or how to structure the .predict() request? And how can I figure this out for other models in the future?
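For anyone else who lands here: the error message itself hints at the fix. The stock image-to-text pipeline expects the image reference directly as "inputs" (a URL string, local path, or PIL image), not nested inside an "img_url" dict. A minimal sketch of the payload shape, assuming the default Hugging Face inference toolkit handler is in use:

```python
import json

# The handler wants the image reference directly under "inputs",
# not wrapped in an extra {"img_url": ...} dict.
data = {
    "inputs": "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
}

# predictor.predict(data) serializes this to JSON under the hood;
# building the payload by hand makes the exact wire format visible.
payload = json.dumps(data)
print(payload)
```

With this shape, predictor.predict(data) should reach the pipeline as a plain URL string instead of a dict.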

@Callam Did you fix it?

I deployed in AWS SageMaker and also have the same problem. Did you fix it?


Yes I did:

https://github.com/DSCO-Co/SageMaker-image-to-text
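For readers who want the gist without following the link: the request body differs depending on whether you send a URL/path (JSON) or raw image bytes (an image content type, for endpoints deployed with a matching serializer). The helper below is hypothetical (build_image_payload is not part of the SageMaker SDK), just a sketch of the two request shapes:

```python
import json

def build_image_payload(image_source):
    """Return (body, content_type) for an image-to-text endpoint.

    URL or path strings are sent as JSON under "inputs"; raw bytes
    are sent as image/jpeg. Hypothetical helper, for illustration.
    """
    if isinstance(image_source, (bytes, bytearray)):
        return bytes(image_source), "image/jpeg"
    return json.dumps({"inputs": image_source}).encode("utf-8"), "application/json"

body, ctype = build_image_payload(
    "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
)
print(ctype)  # application/json
```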
