How to send an inference request to a deployed endpoint

#11
by AkiraKuniyoshi - opened

Hello, I deployed the model to a serverless SageMaker endpoint. However, I can't find documentation on what I should send to the model, and in which format, to get an inference. Which data structure does it expect? Also, does the deployed model handle preprocessing, or do I still need to load the processors and preprocess the image and text myself?
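For context, here is a minimal sketch of the kind of invocation I'm attempting with boto3. The payload keys (`"inputs"`, `"image"`, `"text"`) and the endpoint name are guesses on my part; the actual schema presumably depends on the model's inference handler, which is exactly what I'm unsure about:

```python
import json

def build_payload(image_b64: str, text: str) -> str:
    # Guessed JSON body for a multimodal model: a base64-encoded image
    # plus a text prompt. The real schema depends on the model's handler.
    return json.dumps({"inputs": {"image": image_b64, "text": text}})

def invoke(endpoint_name: str, payload: str) -> dict:
    # Requires AWS credentials and a deployed endpoint; boto3's
    # sagemaker-runtime client is the standard way to call one.
    import boto3
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,       # hypothetical endpoint name
        ContentType="application/json",
        Body=payload,
    )
    return json.loads(response["Body"].read())

# Example payload (the invoke() call itself needs a live endpoint):
payload = build_payload("aGVsbG8=", "What is in this picture?")
```

Is a JSON body like this roughly right, or does the endpoint expect raw bytes / a different content type?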

Help will be much appreciated. Thank you!
