Should Deploy Happen o̶n̶ from a GPU?

#3
by MetaSkills - opened

This might showcase my ignorance about how much the SageMaker tools are doing with the Deep Learning Containers, but I thought I would ask.

AnyModality org

@MetaSkills Yeah, the deployment happens on a GPU when a GPU instance is specified. SageMaker gives users pre-built PyTorch Deep Learning Containers; we customize the container with the packages defined in requirements.txt and use inference.py to define the endpoint interface.
Is that your question?
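For context, the usual SageMaker SDK convention (a sketch of the general layout, not necessarily this repo's exact files) is to bundle inference.py and requirements.txt under a code/ directory inside a model.tar.gz archive; the endpoint unpacks that archive inside the pre-built container and pip-installs requirements.txt at startup. A minimal sketch of building such an archive, with hypothetical placeholder file contents:

```python
import io
import tarfile

# Hypothetical stand-ins for the real files; an actual inference.py
# defines hooks like model_fn/predict_fn that the DLC's serving stack calls.
files = {
    "code/inference.py": b"def model_fn(model_dir):\n    ...\n",
    "code/requirements.txt": b"diffusers\n",
}

# Build model.tar.gz in memory; SageMaker expects this layout when
# model_data points at the uploaded archive in S3.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Inspect the archive to confirm the expected layout.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    print(sorted(tar.getnames()))
```

Note that with this layout the container image itself is not rebuilt at deploy time: the pre-built DLC is pulled from AWS's registry, and only the archive contents are layered in when the endpoint starts.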

Thanks, but not really.

Was more or less thinking about the machine running the deployment. I never spun up SageMaker to play with the notebook because I could never get LFS working with any SageMaker setup (yum not working, etc.).

So, I ended up pulling the notebook apart into little bash scripts and running them from a Python devcontainer locally, on my M2 Mac. And everything works. But the Python code for the SageMaker deploy is still a mystery to me: how is the final DLC built, and where does that happen? My guess is remotely, someplace in the pipeline. Hence the question. Totally something I can keep digging into and learn myself too.

MetaSkills changed discussion title from Should Deploy Happen on a GPU? to Should Deploy Happen o̶n̶ from a GPU?
