Anyone able to deploy an inference endpoint on SageMaker?
I'm unable to deploy an inference endpoint on SageMaker with the suggested script:
import json
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()

# Hub Model configuration: https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'Qwen/Qwen2-VL-7B-Instruct',
    'SM_NUM_GPUS': json.dumps(1)
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface", version="2.3.1"),
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.8xlarge",
    container_startup_health_check_timeout=300,
)
It fails to deploy; the container throws an error saying qwen2-vl is not supported.
Hey hey,
here is a deployment script that works:
import json
import time
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

# latest TGI image available in AWS (not yet resolvable through the SageMaker SDK)
image_uri = "763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.4.0-tgi3.0.1-gpu-py311-cu124-ubuntu22.04"

model_name = "qwen2-vl-7b-instruct" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

hub = {
    'HF_MODEL_ID': 'Qwen/Qwen2-VL-7B-Instruct',
    'SM_NUM_GPUS': json.dumps(1),
    'MESSAGES_API_ENABLED': "true",  # expose the OpenAI-style Messages API
    'CUDA_GRAPHS': json.dumps(0),    # workaround for TGI issue #2823, see below
}

model = HuggingFaceModel(
    name=model_name,
    env=hub,
    role=role,
    image_uri=image_uri
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name=model_name
)
input_data = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
                    },
                },
            ],
        }
    ]
}

predictor.predict(input_data)
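For reference, since MESSAGES_API_ENABLED is set, the endpoint returns an OpenAI-style chat completion; assuming the predictor's default JSON deserializer, the reply text can be read like this (a minimal sketch):

# predict() returns the parsed JSON body (an OpenAI-style chat completion),
# so the generated reply sits under choices[0]["message"]["content"].
response = predictor.predict(input_data)
print(response["choices"][0]["message"]["content"])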
A few comments:
- it uses the latest TGI image available in AWS; it will soon be updated and accessible from the SageMaker SDK
- There is an open issue here: https://github.com/huggingface/text-generation-inference/issues/2823. As a workaround, I'm setting CUDA_GRAPHS=0
Hope it helps!
cheers
Hi pagezyhf, will the above code also work for "OS-Copilot/OS-Atlas-Base-7B"?
Can I create or use my own image URI?
"OS-Copilot/OS-Atlas-Base-7B" this model requires the flag to run remote code (trust_remote_code : "true") but appart from that, it should work. Give it a try and let me know!
"Any possibility this would also work for Qwen2-VL-72B instruct on a larger instance?"
Yes, it should work with a larger instance!
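For example (an untested sketch: the 72B weights in bf16 are roughly 144 GB, so you would shard across all 8 GPUs of something like ml.g5.48xlarge, or move to ml.p4d.24xlarge if memory runs out):

# Hypothetical settings for the 72B model: shard across 8 GPUs on a larger instance.
hub = {
    'HF_MODEL_ID': 'Qwen/Qwen2-VL-72B-Instruct',
    'SM_NUM_GPUS': json.dumps(8),  # tensor-parallel degree = number of GPUs on the instance
    'MESSAGES_API_ENABLED': "true",
    'CUDA_GRAPHS': json.dumps(0),
}

model = HuggingFaceModel(name=model_name, env=hub, role=role, image_uri=image_uri)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.48xlarge",  # 8x A10G (24 GB each); ml.p4d.24xlarge for 8x A100
    endpoint_name=model_name,
)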