---
title: Spin up a notebook server on GPUs
description: Enable a real-time interactive session on GPUs.
version: EN
---

## Interactive run

Interactive runs let you use Jupyter or SSH for live interaction with your data, code, and GPUs. They are useful for tasks such as data exploration, model debugging, and algorithm development. They also let you expose additional ports on the container and communicate through those ports.

### A simple interactive run

Here is an example of a simple interactive run. It specifies resources, a container image, and the duration of the interactive runtime. Note that by default, port `22/tcp` is exposed for SSH and `8888/http` is exposed for JupyterLab.

```yaml Simple interactive run definition
name: gpu-interactive-run
description: Run an interactive GPU-backed Jupyter and SSH server.
resources:
  cluster: vessl-gcp-oregon
  preset: v1.l4-1.mem-42
image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
```

### Ports

You can also specify additional ports to expose during an interactive run.

```yaml Interactive run definition with an additional port
name: gpu-interactive-run
description: Run an interactive GPU-backed Jupyter and SSH server.
resources:
  cluster: vessl-gcp-oregon
  preset: v1.l4-1.mem-42
image: quay.io/vessl-ai/ngc-pytorch-kernel:22.10-py3-202306140422
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
ports:
  - 8501
```

In this example, port 8501 is exposed in addition to the default ports. Note that the `ports` field takes a list, so you can specify multiple ports if necessary. To expose a TCP port, append `/tcp` to the port number; otherwise `/http` is used implicitly.
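To illustrate those rules, the following hypothetical fragment exposes one HTTP port and one TCP port side by side (the port numbers here are arbitrary examples, not part of the demo above):

```yaml
ports:
  - 8501       # served over HTTP (the implicit default)
  - 6006/tcp   # served over raw TCP, e.g. for a non-HTTP service
```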
## Run a Stable Diffusion demo with GPU resources

Now let's move on to running a Stable Diffusion demo. The following configuration sets up an interactive run that executes a Stable Diffusion demo on an L4 GPU and exposes the interactive Streamlit demo on port 8501.

```yaml Interactive run YAML for a Stable Diffusion inference demo
name: Stable Diffusion Web
description: Run an inference web app for a Stable Diffusion demo.
image: nvcr.io/nvidia/pytorch:22.10-py3
resources:
  cluster: vessl-gcp-oregon
  preset: v1.l4-1.mem-42
run:
  - command: |
      bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
      bash webui.sh
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
ports:
  - 8501
```

In this interactive run, the Docker image `nvcr.io/nvidia/pytorch:22.10-py3` is used, and an L4 GPU (`resources.preset: v1.l4-1.mem-42`) is allocated for the run. The run can last up to 24 hours (`interactive.max_runtime: 24h`), and the Streamlit demo is accessible via port 8501.

The run commands first execute the `webui.sh` script streamed from a remote location (`bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)`), followed by the execution of the locally downloaded `webui.sh`.
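The `bash <(…)` construct is Bash process substitution: the output of the inner command is presented to `bash` as a file, so the fetched script runs without being saved to disk first. Here is a minimal offline sketch of the same pattern, with `printf` standing in for `wget -qO- <url>`:

```shell
# Process substitution: bash reads the script from a file descriptor
# that the inner command writes to, so nothing touches the filesystem.
# printf stands in for `wget -qO- <url>` to keep the example offline.
bash <(printf 'echo hello from a streamed script\n')
# prints: hello from a streamed script
```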
This configuration shows how to set up an interactive run that serves a GPU-accelerated demo with real-time user interaction over a specified port.

## What's next

For more advanced configurations and examples, please visit [VESSL Hub](https://vesslai.notion.site/9e42f785bbdf42379b2112b859d8c873?v=8d1527bc18154381b9baf35d4068b227&pvs=4).

<Card
  title="VESSL Hub"
  icon="database"
  href="https://vesslai.notion.site/9e42f785bbdf42379b2112b859d8c873?v=8d1527bc18154381b9baf35d4068b227&pvs=4"
>
  A variety of YAML examples that you can use as references
</Card>