Deploying on SageMaker
#18 opened by elanmarkowitz
I'm trying to deploy Mixtral on SageMaker with the following code:
```python
import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

# Hub Model configuration. https://huggingface.co/models
model_id = 'mistralai/Mixtral-8x7B-Instruct-v0.1'
hub = {
    'HF_MODEL_ID': model_id,
    'SM_NUM_GPUS': json.dumps(8)
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface"),
    transformers_version="4.36.0",
    env=hub,
    role=role,
    name=f"HF-{model_id}".replace('/', '-').replace('.', '-')
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.48xlarge",
    container_startup_health_check_timeout=300,
)
```
But I get the following error:

```
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 159, in serve_inner
    model = get_model(
File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 291, in get_model
    raise ValueError("sharded is not supported for AutoModel")
ValueError: sharded is not supported for AutoModel
```
Any ideas on how to fix?
Use this image URI:

```
763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.1.1-tgi1.3.1-gpu-py310-cu121-ubuntu20.04-v1.0
```
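(For reference, a minimal sketch of how that URI can be wired into the snippet from the question. The error most likely comes from `get_huggingface_llm_image_uri("huggingface")` resolving to a TGI build that predates Mixtral support, so the server falls back to AutoModel, which cannot be sharded; pinning a TGI 1.3+ image avoids that. The us-east-1 registry in the URI is an assumption and should match your deployment region.)

```python
# Sketch: pin the TGI 1.3.1 container instead of letting the SDK resolve one.
# Reuses the hub, role, and model_id variables from the question above;
# the us-east-1 ECR registry is an assumption tied to the endpoint's region.
image_uri = (
    "763104351884.dkr.ecr.us-east-1.amazonaws.com/"
    "huggingface-pytorch-tgi-inference:2.1.1-tgi1.3.1-gpu-py310-cu121-ubuntu20.04-v1.0"
)

huggingface_model = HuggingFaceModel(
    image_uri=image_uri,  # explicit TGI image with Mixtral support
    env=hub,
    role=role,
    name=f"HF-{model_id}".replace('/', '-').replace('.', '-')
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.48xlarge",
    container_startup_health_check_timeout=300,
)
```

Alternatively, asking the SDK for a specific version, e.g. `get_huggingface_llm_image_uri("huggingface", version="1.3.1")`, should resolve a recent enough image for your region without hard-coding the registry.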
Thanks. I also found this nice blog https://www.philschmid.de/sagemaker-deploy-mixtral#1-setup-development-environment
elanmarkowitz changed discussion status to closed