Hugging Face presents
Training Cluster
As a service
Train your LLM at scale on our infrastructure
I want to train a … parameters model on a … dataset, on …
Estimate: –/– · $ –
How does it work?
Training your large model on the Hugging Face Accelerator Cluster
You provide the dataset (or we work together to create it) and the single-node training parameters, as in the sketch after these steps.
model, optimizer, data = accelerator.prepare(model, optimizer, data)
We run the training for you and scale it to thousands of Accelerators.
Training complete
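For reference, here is a minimal sketch of what that single-node script might look like with Hugging Face Accelerate. The tiny model, synthetic data, and hyperparameters below are placeholders for illustration only, not part of the service.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder model and data; in practice this is your LLM and your dataset.
model = nn.Linear(128, 1)
dataset = TensorDataset(torch.randn(1024, 128), torch.randn(1024, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data = DataLoader(dataset, batch_size=32, shuffle=True)

# The line quoted above: wrap everything so the same loop runs unchanged
# on a single GPU or across many accelerators.
model, optimizer, data = accelerator.prepare(model, optimizer, data)

loss_fn = nn.MSELoss()
model.train()
for inputs, targets in data:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()

The same script can then be launched across many processes and machines with the accelerate launch CLI, without changing the training loop.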
Train your own foundation model
A model optimized for your specific domain and business needs. You’re free to use it however you want.
Keep control of your data
We don’t store your training data, and you get access to the full training output, including logs and checkpoints.
Expert infra support
We have experience with large-scale training, having contributed to LLMs like BLOOM, StarCoder, and more.
Get started