Inference with SageMaker predictor

#11 opened by michalkowalczuk

Hi all :)

Could anybody help me figure out what data I need to pass to the predictor after deploying the model with SageMaker?

After code similar to this finishes deploying the model:

import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID':'patrickjohncyh/fashion-clip',
    'HF_TASK':'zero-shot-image-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.26.0',
    pytorch_version='1.13.1',
    py_version='py39',
    env=hub,
    role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1, # number of instances
    instance_type='ml.m5.xlarge' # ec2 instance type
)

what exactly do I need to put in the request payload to get a result back?

data = {
    "inputs": {
        # what goes here?
    }
}

predictor.predict(data)
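
For what it's worth, my current guess is based on how the transformers zero-shot-image-classification pipeline is usually called (an image plus candidate_labels), so maybe the payload looks roughly like the sketch below. The image URL and labels are just made-up placeholders, and I haven't confirmed this shape actually works against the endpoint:

# rough guess: pipeline-style payload with an image reference and candidate labels
data = {
    "inputs": "https://example.com/fashion-item.jpg",  # placeholder image URL
    "parameters": {
        "candidate_labels": ["dress", "sneakers", "handbag"]  # placeholder labels
    }
}

predictor.predict(data)

If this isn't the right shape (for example, if the image has to be sent as base64-encoded bytes rather than a URL), that is exactly what I'm hoping someone can clarify.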

Thank you very much,
Michal

Hey Michal, did you sort out what the data model is?
