---
title: Default resource specs
version: EN
---
<img style={{ borderRadius: '0.5rem' }}
src="/images/clusters/specs/1_specs.png"
/>
Under **Resource Specs**, you can define custom resource presets that become the only options users can select when launching ML workloads. You can also specify the **priority** of the defined options. For example, with the resource specs set as above, users will only be able to select the four options below.
<img style={{ borderRadius: '0.5rem' }}
src="/images/clusters/specs/2_resource.png"
/>
These default options help admins optimize resource usage by (1) preventing any one user from occupying an excessive number of GPUs and (2) preventing unbalanced resource requests that skew resource usage. Average users, meanwhile, can get started right away without working out the exact number of CPU cores or the amount of memory to request.
## Step-by-step Guide
Click **New resource spec** and define the following parameters.
<img style={{ borderRadius: '0.5rem' }}
src="/images/clusters/specs/3_add.png"
/>
* **Name** – Set a name for the preset. Use a descriptive name that represents the preset, such as `a100-2.mem-16.cpu-6`.
* **Processor type** – Define the preset by processor type, either CPU or GPU.
* **CPU limit** – Enter the number of CPU cores. For `a100-2.mem-16.cpu-6`, enter `6`.
* **Memory limit** – Enter the amount of memory in GB. For `a100-2.mem-16.cpu-6`, enter `16`.
* **GPU type** – Specify which GPU you are using. You can find this information by running the `nvidia-smi` command on your server. In the following example, the value is `a100-sxm-80gb`.
```bash
nvidia-smi
```
```bash
Thu Jan 19 17:44:05 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.08    Driver Version: 510.73.08    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100-SXM...  On   | 00000000:01:00.0 Off |                    0 |
| N/A   40C    P0    64W / 275W |      0MiB / 81920MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
```
* **GPU limit** – Enter the number of GPUs. For `a100-2.mem-16.cpu-6`, enter `2`. You can also enter decimal values if you are using Multi-Instance GPU (MIG).
* **Priority** – Assigning different priority values disables the FIFO scheduler and schedules workloads by priority, with lower values scheduled first. The example preset below always puts workloads running on `gpu-1` ahead of any other workloads.
<img style={{ borderRadius: '0.5rem' }}
src="/images/clusters/specs/4_list.png"
/>
* **Available workloads** – Select the types of workloads that can use the preset. With this, you can guide users toward **Experiments** by preventing them from running **Workspaces** with 4 or 8 GPUs.
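If you need to look up the values for a preset (CPU cores, memory in GB, GPU model) on the cluster node itself, a quick sketch using standard Linux utilities might look like this. The `--query-gpu` and `--format` flags are stock `nvidia-smi` options; the guard around it assumes the command may be absent on CPU-only nodes:

```shell
# Number of CPU cores available on this node
nproc

# Total memory in GB (the "Mem:" row, total column)
free -g | awk '/^Mem:/ {print $2}'

# GPU model name, one line per GPU, if an NVIDIA driver is installed
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name --format=csv,noheader
fi
```

Lowercase the GPU name and replace spaces with hyphens to match the `a100-sxm-80gb`-style value used above.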