---
layout: docs
page_title: Use static host volumes
description: >-
  Configure and deploy a host volume to support a MySQL workload that requires
  persistent storage.
---

# Use static host volumes

Static host volumes let you manage storage for stateful workloads running inside
your Nomad cluster. This tutorial walks you through deploying a MySQL workload
that uses a static host volume for persistent storage. Unlike [dynamic host
volumes][dhv_tutorial], static host volumes are defined in the Nomad client
configuration, and their lifecycle is managed outside of Nomad.

Static host volumes provide a workload-agnostic way to specify persistent
storage that is available to Nomad task drivers such as `exec`, `java`, and
`docker`. Refer to the [`host_volume` specification][`host_volume`] for more
information.
Nomad is aware of host volumes during the scheduling process,
which enables it to make scheduling decisions based on the availability of
static host volumes on a specific client.

Contrast this with Nomad's support for Docker volumes. Because Docker volumes
are managed outside of Nomad and the Nomad scheduler is not aware of them, you
must either deploy such volumes to every client or add a manually maintained
constraint to tell the scheduler where they are present.
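
For example, scheduling a job onto the clients that have a pre-created Docker
volume would require a constraint like the following sketch, where
`meta.mysql_volume` is a hypothetical client metadata key that you would have
to set and keep accurate yourself:

```hcl
# Hypothetical constraint pinning a job to clients where an operator has
# set the custom metadata key "mysql_volume" to "present".
constraint {
  attribute = "${meta.mysql_volume}"
  value     = "present"
}
```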

## Prerequisites

To perform the tasks described in this guide, you need a Nomad environment with
Consul installed. Consul provides service discovery in this tutorial and is not
required for static host volumes themselves. You can use this [Terraform
environment][nomad-tf] to provision a sandbox environment. This tutorial assumes
a cluster with one server node and three client nodes.

<Note>

This tutorial is for demo purposes and assumes only a single server node.
Consult the [reference architecture][reference-arch] for production
configuration.

</Note>

### Install the MySQL client

You will use the MySQL client to connect to the MySQL database and verify your data.
Ensure it is installed on a node with access to port 3306 on your Nomad clients:

Ubuntu:

```shell-session
$ sudo apt install mysql-client
```

CentOS:

```shell-session
$ sudo yum install mysql
```

macOS via Homebrew:

```shell-session
$ brew install mysql-client
```

## Build the static host volume

### Create a target directory

On a Nomad client node in your cluster, create a directory that will be used for
persisting the MySQL data. For this example, let's create the directory
`/opt/mysql/data`.

```shell-session
$ sudo mkdir -p /opt/mysql/data
```

You might need to change the owner on this folder if the Nomad client does not
run as the `root` user.

```shell-session
$ sudo chown «Nomad user» /opt/mysql/data
```

### Configure the client

Edit the Nomad configuration on this client to add the static host volume
definition.

Add a [`host_volume`] block to the `client` block of your Nomad configuration:

```hcl
  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
```
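
For context, this block nests inside the existing `client` block of the agent
configuration. A minimal sketch, assuming your client configuration has no
other settings that matter here:

```hcl
client {
  enabled = true

  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}
```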

Save this change, and then restart the Nomad service on this client to make the
static host volume active.
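
If systemd manages Nomad on the client, the restart looks like this; adapt the
command to however your environment runs Nomad:

```shell-session
$ sudo systemctl restart nomad
```

While still on the client, you can verify that the host volume is configured by
using the `nomad node status` command as shown below: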

```shell-session
$ nomad node status -short -self
ID           = 12937fa7
Name         = ip-172-31-15-65
Class        = <none>
DC           = dc1
Drain        = false
Eligibility  = eligible
Status       = ready
Host Volumes = mysql
Drivers      = docker,exec,java,mock_driver,raw_exec,rkt
...
```
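
The `Host Volumes` line confirms that the `mysql` volume is registered on this
client. Adding the `-verbose` flag should also list each host volume's path and
read-only setting:

```shell-session
$ nomad node status -verbose -self
```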

## Deploy MySQL

### Create the job file

You are now ready to deploy a MySQL database that can use the static host volumes for
storage. Create a file called `mysql.nomad.hcl` and provide it the following
contents:

```hcl
job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    volume "mysql" {
      type      = "host"
      read_only = false
      source    = "mysql"
    }

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      volume_mount {
        volume      = "mysql"
        destination = "/var/lib/mysql"
        read_only   = false
      }

      env {
        MYSQL_ROOT_PASSWORD = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest"

        ports = ["db"]
      }

      resources {
        cpu    = 500
        memory = 1024
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }

    network {
      port "db" {
        static = 3306
      }
    }
  }
}
```

#### Notes about the above job specification

- The service name is `mysql-server` which you will use later to connect to the
  database.

- The `read_only` argument is supplied on all of the volume-related blocks to
  help highlight each place you would need to change to make a read-only volume
  mount; the sketch after this list shows the read-only variant. Refer to the
  [`host_volume`], [`volume`], and [`volume_mount`] specifications for more
  details.

- For lower-memory instances, you might need to reduce the requested memory in
  the `resources` block to fit the resources available in your cluster.
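
For reference, a read-only variant of the group-level `volume` and task-level
`volume_mount` blocks from the job above would look like the following sketch;
the client's [`host_volume`] block takes the same flag:

```hcl
volume "mysql" {
  type      = "host"
  read_only = true
  source    = "mysql"
}

volume_mount {
  volume      = "mysql"
  destination = "/var/lib/mysql"
  read_only   = true
}
```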

### Run the job

Register the job file you created in the previous step with the following
command:

```shell-session
$ nomad run mysql.nomad.hcl
==> Monitoring evaluation "aa478d82"
    Evaluation triggered by job "mysql-server"
    Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"
```

Check the status of the allocation and ensure the task is running:

```shell-session
$ nomad status mysql-server
ID            = mysql-server
...
Summary
Task Group    Queued  Starting  Running  Failed  Complete  Lost
mysql-server  0       0         1        0       0         0
```
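
For a closer look at the running task, you can also inspect the allocation
directly, substituting the allocation ID from your own `nomad run` output:

```shell-session
$ nomad alloc status 6c3b3703
```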

## Write data to MySQL

### Connect to MySQL

Using the MySQL client you installed [earlier][install_mysql], connect to the
database:

```shell-session
$ mysql -h mysql-server.service.consul -u web -p -D itemcollection
```

The password for this demo database is `password`.

<Note>

 This tutorial is for demo purposes and does not follow best
practices for securing database passwords. See [Keeping Passwords
Secure][password-security] for more information.

</Note>

Consul is installed alongside Nomad in this cluster, so you are able to connect
using the `mysql-server` service name that the task in your job file registered.
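
If the hostname does not resolve, you can confirm the service registration by
querying Consul's DNS interface directly, assuming a local Consul agent
listening on the default DNS port of 8600:

```shell-session
$ dig @127.0.0.1 -p 8600 mysql-server.service.consul SRV
```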

### Add test data

Once you are connected to the database, verify the table `items` exists:

```sql
mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items                    |
+--------------------------+
1 row in set (0.00 sec)
```

Display the contents of this table with the following command:

```sql
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
+----+----------+
3 rows in set (0.00 sec)
```

Now add some data to this table. After you terminate the database in Nomad and
bring it back up, this data should still be intact:

```sql
mysql> INSERT INTO items (name) VALUES ('glove');
```

Run the `INSERT INTO` command as many times as you like with different values.

```sql
mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
```

Once you are done, type `exit` to return to the Nomad client command line:

```sql
mysql> exit
Bye
```

## Destroy the database job

Run the following command to stop and purge the MySQL job from the cluster:

```shell-session
$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
    Evaluation triggered by job "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"
```

Verify no jobs are running in the cluster:

```shell-session
$ nomad status
No running jobs
```
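
Because the volume's contents live on the client's filesystem, stopping and
purging the job does not delete the MySQL data. On the client, you can confirm
that the backing directory still has content:

```shell-session
$ sudo ls /opt/mysql/data
```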

In more advanced cases, the directory backing the static host volume could be a
mounted network filesystem, such as NFS, or a cluster-aware filesystem, such as
GlusterFS or SeaweedFS. This can enable more complex, automatic failure-recovery
scenarios in the event of a node failure.
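
The client configuration keeps the same shape in that case; only the path
changes. A sketch, assuming `/mnt/nfs/mysql` is an NFS mount that you manage
outside of Nomad:

```hcl
  host_volume "mysql" {
    path      = "/mnt/nfs/mysql"
    read_only = false
  }
```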

## Re-deploy and verify

Using the `mysql.nomad.hcl` job file [from earlier][create_job], re-deploy the
database to the Nomad cluster.

```shell-session
$ nomad run mysql.nomad.hcl
==> Monitoring evaluation "61b4f648"
    Evaluation triggered by job "mysql-server"
    Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"
```

Re-connect to MySQL with the same client command you used earlier:
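
```shell-session
$ mysql -h mysql-server.service.consul -u web -p -D itemcollection
```

The data you added prior to destroying the database is still present: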

```sql
mysql> select * from items;
+----+----------+
| id | name     |
+----+----------+
|  1 | bike     |
|  2 | baseball |
|  3 | chair    |
|  4 | glove    |
|  5 | hat      |
|  6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)
```

## Clean up

Once you have completed this guide, you should perform the following cleanup steps:

- Stop and purge the `mysql-server` job.

- Remove the `host_volume "mysql"` block from your Nomad client configuration
  and restart the Nomad service on that client.

- Remove the `/opt/mysql/data` directory, along with as much of the enclosing
  directory tree as you no longer require. The sketch after this list shows
  these steps as shell commands.
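
A minimal sketch of the cleanup, assuming the names and paths used in this
tutorial and a systemd-managed client:

```shell-session
$ nomad stop -purge mysql-server
$ sudo systemctl restart nomad  # on the client, after editing its configuration
$ sudo rm -rf /opt/mysql
```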

## Summary

In this guide, you configured a static host volume on a Nomad client using a client-local
directory. You created a job that mounted this volume to a Docker MySQL container
and wrote data that persisted beyond the job's lifecycle.

[`host_volume`]: /nomad/docs/configuration/client#host_volume-block
[`volume_mount`]: /nomad/docs/job-specification/volume_mount
[`volume`]: /nomad/docs/job-specification/volume
[create_job]: #create-the-job-file
[install_mysql]: #install-the-mysql-client
[nomad-tf]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud
[password-security]: https://dev.mysql.com/doc/refman/8.0/en/password-security.html
[reference-arch]: /nomad/docs/deploy/production/reference-architecture#high-availability
[dhv_tutorial]: /nomad/docs/stateful-workloads/dynamic-host-volumes

