chansung 
posted an update Jan 12
As a GPU-poor ML practitioner, I found a nice open source project.

dstackai is the perfect open source project for the GPU poor. Simply specify your resource requirements (GPU RAM, spot instances, ...), and it will suggest the cheapest options among popular GPU cloud providers (AWS, GCP, Azure, Lambda Labs, TensorDock, and vast.ai).

It provisions VM instances for three different use cases, all essential for any ML project:
- Dev: connect the provisioned VM instance to your favorite IDE (including Jupyter)
- Task: run experiments (training, fine-tuning, ...) via SSH
- Service: run your model in production via HTTPS
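For the "service" case above, the configuration is a small YAML file. This is only a sketch based on dstack's documented service format; field names can differ between dstack versions, and the model/image choices here are illustrative:

```yaml
# serve.dstack.yml -- hedged sketch of a dstack service configuration
type: service

# Official TGI container image
image: ghcr.io/huggingface/text-generation-inference:latest

env:
  - MODEL_ID=mistralai/Mistral-7B-v0.1

commands:
  - text-generation-launcher --port 8000

# Port the HTTPS gateway forwards to
port: 8000

# Let dstack pick the cheapest instance with at least 24GB of GPU RAM
resources:
  gpu: 24GB
```

You would then run something like `dstack run . -f serve.dstack.yml`, and dstack compares prices across the configured providers before provisioning.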

dstack is 100% open source, but you need your own account with each GPU cloud provider, enough GPU quota, credentials configured, and so on, all by yourself. Luckily, dstack will announce dstack Cloud, which lets you skip all of these hassles. The price is almost the same as connecting to each cloud directly with your own account.

The attached code snippet shows how to serve a Mistral-7B model with Text Generation Inference (TGI) on the cheapest VM instance that has 24GB of VRAM. You then get an HTTPS endpoint and can interact with it as usual through the TGI client library, as shown in the second code snippet.
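Once the service is up, querying it is plain HTTP against TGI's `/generate` endpoint. A minimal sketch using only the standard library (the endpoint URL is a placeholder for whatever dstack prints after provisioning, and the `[INST]` wrapping assumes a Mistral instruct-style model):

```python
import json
import urllib.request


def build_tgi_payload(prompt: str, max_new_tokens: int = 128) -> bytes:
    """Build the JSON body for TGI's POST /generate endpoint."""
    body = {
        # Mistral instruct models expect prompts wrapped in [INST] ... [/INST]
        "inputs": f"[INST] {prompt} [/INST]",
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return json.dumps(body).encode("utf-8")


def query_tgi(base_url: str, prompt: str) -> str:
    """Send a generation request and return the generated text."""
    req = urllib.request.Request(
        f"{base_url}/generate",
        data=build_tgi_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]


if __name__ == "__main__":
    # Placeholder: substitute the HTTPS endpoint dstack gives you.
    print(query_tgi("https://<your-dstack-endpoint>", "What is dstack?"))
```

The official `text_generation` client package does the same thing with a nicer API, but the raw request makes the protocol explicit.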

If you want to learn more about dstack, check out the official website. For individual open source contributors in ML without GPU sponsors, this kind of project is pretty important.
: https://dstack.ai/

If you are looking for an alternative, there is the SkyPilot project as well
: https://github.com/skypilot-org/skypilot

Slick! Let's do a project on diffusion models using the cheapest option possible. But we could also show whether it provides the highest efficiency. What say?


Like in terms of different choices of hardware specs, cloud providers, and regions, or something?