---
title: Overview
version: EN
---
<img style={{ borderRadius: '0.5rem' }}
src="/images/clusters/overview/1_clusters.png"
/>
VESSL enables seamless scaling of containerized ML workloads from a personal laptop to cloud instances or Kubernetes-backed on-premise clusters.  
While VESSL comes with a fully managed AWS cloud out of the box, you can also integrate **an unlimited number** of (1) personal Linux machines, (2) on-premise GPU servers, and (3) private clouds. You can then use VESSL as a single point of access to multiple clusters.
VESSL Clusters simplifies the end-to-end management of large-scale, organization-wide ML infrastructure, from integration to monitoring. These features are available under **Clusters**.
* **Single-command integration** – Set up a hybrid or multi-cloud infrastructure with a single command.
* **GPU-accelerated workloads** – Run training, optimization, and inference tasks on GPUs in seconds.
* **Resource optimization** – Match and scale workloads automatically based on the required compute resources.
* **Cluster dashboard** – Monitor real-time usage and the health and incident status of clusters, down to each node.
* **Reproducibility** – Record runtime metadata such as hardware and instance specifications.
<img style={{ borderRadius: '0.5rem' }}
src="/images/clusters/overview/2_figure.png"
/>