Inference issues on SageMaker Studio

#11
by raafat1983 - opened

I managed to deploy Yi-VL-34B on instance_type='ml.p3.16xlarge'; however, when I run
predictor.predict({
    "inputs": "Can you please let us know more details about your"
})

I got this error:
---------------------------------------------------------------------------
ModelError Traceback (most recent call last)
Cell In[15], line 1
----> 1 predictor.predict({
2 "inputs": "Can you please let us know more details about your"
3 })

File /opt/conda/lib/python3.10/site-packages/sagemaker/base_predictor.py:206, in Predictor.predict(self, data, initial_args, target_model, target_variant, inference_id, custom_attributes, component_name)
203 if inference_component_name:
204 request_args["InferenceComponentName"] = inference_component_name
--> 206 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
207 return self._handle_response(response)

File /opt/conda/lib/python3.10/site-packages/botocore/client.py:553, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
549 raise TypeError(
550 f"{py_operation_name}() only accepts keyword arguments."
551 )
552 # The "self" in this scope is referring to the BaseClient.
--> 553 return self._make_api_call(operation_name, kwargs)

File /opt/conda/lib/python3.10/site-packages/botocore/client.py:1009, in BaseClient._make_api_call(self, operation_name, api_params)
1005 error_code = error_info.get("QueryErrorCode") or error_info.get(
1006 "Code"
1007 )
1008 error_class = self.exceptions.from_code(error_code)
-> 1009 raise error_class(parsed_response, operation_name)
1010 else:
1011 return parsed_response

ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "\u0027llava\u0027"
}
". See https://eu-west-1.console.aws.amazon.com/cloudwatch/home?region=eu-west-1#logEventViewer:group=/aws/sagemaker/Endpoints/huggingface-pytorch-inference-2024-01-27-23-05-47-532 in account XXXXX for more information.]

Any advice?
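For context, the endpoint was deployed with the standard SageMaker Hugging Face flow, roughly like the sketch below (the IAM role, HF_TASK, and container versions are placeholders for illustration, not necessarily the exact values used):

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # placeholder: IAM role used by the endpoint

# Placeholder hub config: model id from the Hugging Face Hub, task is an assumption
hub = {
    "HF_MODEL_ID": "01-ai/Yi-VL-34B",
    "HF_TASK": "image-to-text",
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.37",  # placeholder DLC versions
    pytorch_version="2.1",
    py_version="py310",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.p3.16xlarge",
)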

01-ai org

Sorry, I am not familiar with SageMaker Studio, but you can try LMDeploy to deploy our model for inference.
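A minimal LMDeploy sketch, assuming its vision-language pipeline supports Yi-VL-34B on your hardware (the image URL is a placeholder):

# pip install lmdeploy
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Load the model into an inference pipeline (model id from the Hugging Face Hub)
pipe = pipeline('01-ai/Yi-VL-34B')

# Placeholder image URL for illustration
image = load_image('https://example.com/sample.jpg')

# Vision-language call: a (prompt, image) pair
response = pipe(('Describe this image in detail.', image))
print(response.text)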
