|
--- |
|
title: Overview |
|
description: Launch and scale AI workloads without the hassle of managing infrastructure |
|
icon: "hand-wave" |
|
version: EN |
|
--- |
|
|
|
## VESSL AI -- Purpose-built cloud for AI |
|
|
|
VESSL AI provides a unified interface for training and deploying AI models on the cloud. Simply define your GPU resources and point to your code & dataset; VESSL AI does the orchestration & heavy lifting for you (sketched in the minimal YAML after this list):
|
1. Creates a GPU-accelerated container with the right Docker image.
|
2. Mounts your code and dataset from GitHub, Hugging Face, Amazon S3, and more.
|
3. Launches the workload on our fully managed GPU cloud.
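
For a taste of what that looks like, here is a minimal sketch that reuses the fields from the full example under "How does it work?" below; the `run` command is only a placeholder entry point.

```yaml
resources:            # 1. GPU resources for the container
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3   # the Docker image the container boots from
import:               # 2. code & dataset mounts (GitHub, Hugging Face, Amazon S3, ...)
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: main
run:                  # 3. the command VESSL launches on the managed GPU cloud
  - command: python main.py   # placeholder entry point
    workdir: /code/
```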
|
|
|
<CardGroup cols={2}> |
|
<Card title="One any cloud, at any scale" href="#"> |
|
Instantly scale workloads across multiple clouds. |
|
<br/> |
|
<img className="rounded-md" src="/images/get-started/overview-cloud.png" /> |
|
</Card> |
|
|
|
<Card title="Streamlined interface" href="#"> |
|
Launch any AI workload with a unified YAML definition.
|
<br/> |
|
<img className="rounded-md" src="/images/get-started/overview-yaml.png" /> |
|
</Card> |
|
|
|
<Card title="End-to-end coverage" href="#"> |
|
A single platform, from fine-tuning to deployment.
|
<br/> |
|
<img className="rounded-md" src="/images/get-started/overview-pipeline.png" /> |
|
</Card> |
|
|
|
<Card title="A centralized compute platform" href="#"> |
|
Optimize GPU usage and save up to 80% on cloud costs.
|
<br/> |
|
<img className="rounded-md" src="/images/get-started/overview-gpu.png" /> |
|
</Card> |
|
</CardGroup> |
|
|
|
## What can you do? |
|
|
|
- Run compute-intensive AI workloads remotely within seconds. |
|
- Fine-tune LLMs with distributed training and auto-failover, with minimal setup.
|
- Scale training and inference workloads horizontally. |
|
- Deploy an interactive web application for your model.
|
- Serve your AI models as web endpoints. |
|
|
|
## How to get started |
|
|
|
Head over to [vessl.ai](https://vessl.ai) and sign up for a free account; no `docker build` or `kubectl get` required. The sketch after the steps below shows the typical CLI flow.
|
1. Create your account at [vessl.ai](https://vessl.ai) and get $30 in free GPU credits. |
|
2. Install our Python package — `pip install vessl`. |
|
3. Follow our [Quickstart](/get-started/quickstart) guide or try out our example models at [VESSL Hub](https://vessl.ai/hub). |
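
In practice the whole flow fits in a few commands. A minimal sketch, assuming the CLI exposes `vessl configure` for authentication and `vessl run create -f` for launching a Run; check the Quickstart for the exact syntax:

```bash
pip install vessl                # install the Python package and CLI
vessl configure                  # authenticate and pick a default organization & project (assumed entry point)
vessl run create -f run.yaml     # launch a Run from a YAML definition like the one in the next section (assumed entry point)
```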
|
|
|
## How does it work? |
|
|
|
VESSL AI abstracts away the complex infrastructure and backends behind launching AI workloads into a simple YAML file, so you don't have to wrestle with AWS, Kubernetes, or Docker. Here's an example that launches a web app for Stable Diffusion.
|
|
|
```yaml
resources:
  cluster: vessl-gcp-oregon
  preset: gpu-l4-small
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3
import:
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: main
  /model/: hf://huggingface.co/VESSL/SSD-1B
run:
  - command: |-
      pip install -r requirements.txt
      streamlit run ssd_1b_streamlit.py --server.port=80
    workdir: /code/SSD-1B
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
ports:
  - name: streamlit
    type: http
    port: 80
```
|
|
|
With every YAML file, you are creating a VESSL Run. A VESSL Run is the atomic unit of VESSL AI: a single, Kubernetes-backed AI workload. You can keep building on the same YAML definition as you progress through the AI lifecycle, from checkpointing models during fine-tuning to exposing ports for inference.
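
For illustration, the same Run might later pick up a dataset mount, a checkpoint output, and an inference port. The `export` field and the storage URIs below are assumptions made for this sketch; consult the Run YAML reference for the exact schema.

```yaml
# Hypothetical fragment extending the Run above across the lifecycle.
import:
  /dataset/: hf://huggingface.co/datasets/your-org/your-dataset   # placeholder dataset mount
export:
  /ckpt/: s3://your-bucket/checkpoints/    # assumed output mount for model checkpoints
ports:
  - name: api        # same `ports` field as in the example above, now serving inference
    type: http
    port: 8000
```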
|
|
|
## What's next? |
|
|
|
See VESSL AI in action with our example Runs and pre-configured open-source models.
|
|
|
<CardGroup cols={2}> |
|
<Card title="Quickstart – Hello, world!" href="get-started/quickstart"> |
|
Fine-tune Llama2-7B with a code instructions dataset. |
|
</Card> |
|
|
|
<Card title="GPU-accelerated notebook" href="get-started/gpu-notebook"> |
|
Launch a GPU-accelerated Streamlit app of Mistral 7B. |
|
</Card> |
|
|
|
<Card title="SSD-1B Playground" href="get-started/stable-diffusion"> |
|
Interactive playground of a lighter and faster version of Stable Diffusion XL.
|
</Card> |
|
|
|
<Card title="Llama2-7B Fine-tuning" href="get-started/llama2"> |
|
Translate audio snippets into text on a Streamlit playground. |
|
</Card> |
|
</CardGroup> |