Deploy a Nomad cluster with Consul service mesh using Terraform on bare metal
Download the contents of the repository

The latitude.sh/examples repository is the easiest way to get started. It contains all the files required to deploy and set up the cluster. After copying the files to your local working directory, open a terminal, export your Latitude.sh API key to an environment variable, and initialize Terraform.

Set up variables
The variables.tf file contains all the variables used in the Terraform configuration. Change the values of these variables to customize your deployment, replacing the defaults with your own.

Variables you will need to change:

- project_id: Go to the dashboard, select the project you want to deploy the cluster on, and click on Project settings. Copy the ID from the top-right corner of the page.
- operating_system: The operating system to deploy on your servers and clients. The default value is ubuntu_24_04_x64_lts. The setup scripts are not guaranteed to work on other operating systems and might require modification.
- plan: The slug of the plan you want to use. The slug is the plan name with hyphens instead of dots. To use the c3.small.x86 plan, set c3-small-x86 as the value.
- nomad_server_count and nomad_client_count: The number of Nomad servers and clients you want to deploy. The default value is 1 for both. For production environments, it is recommended to use at least 3 servers and 3 clients.
- nomad_region: The Latitude.sh location your cluster will be deployed to. The value is the location's slug. Get the slug from the List all regions API endpoint.
- nomad_vlan_id: Go to the Latitude.sh dashboard and click on Private networks in the sidebar. Create a VLAN in the location your cluster will be deployed to. The variable value is the VID.
- ssh_key_id: Go to the Latitude.sh dashboard, click on Project settings, then SSH Keys. Copy the ID of the SSH key you want to use.
- private_key_path: The path to the private key file on your local machine. This is the private key that matches the public key you added to the project. You only need to change this if the path differs from ~/.ssh/id_rsa.
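As a sketch, the values above can be collected in a terraform.tfvars file. Every ID below is a placeholder, not a real value:

```hcl
# terraform.tfvars — placeholder values; replace them with your own IDs
project_id         = "proj_example"         # hypothetical project ID
operating_system   = "ubuntu_24_04_x64_lts"
plan               = "c3-small-x86"
nomad_server_count = 3
nomad_client_count = 3
nomad_region       = "SAO"                  # hypothetical location slug
nomad_vlan_id      = "2000"                 # the VID of the VLAN you created
ssh_key_id         = "ssh_example"          # hypothetical SSH key ID
private_key_path   = "~/.ssh/id_rsa"
```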
Plan and apply your changes
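The deployment can be sketched as a short shell session. The environment variable name below is an assumption — check the provider block in the repository for the exact variable the Latitude.sh provider reads:

```shell
# Export your API key (the variable name is an assumption; check the provider docs)
export LATITUDESH_AUTH_TOKEN="your-api-key"

terraform init    # download providers and initialize the working directory
terraform plan    # preview the servers, network, and other resources to be created
terraform apply   # provision the cluster; confirm with "yes" when prompted
```

Once the apply completes, rerunning terraform plan should report no pending changes.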
Access your cluster

The Nomad UI is served on port 4646 of each server's public IP. For example, if the public IP of one of your Nomad servers is 189.1.2.3, you will access the Nomad UI at http://189.1.2.3:4646.

Deploying a job
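As a minimal, hypothetical example of a job you could deploy (the job name, image, and resource values are placeholders, not taken from the repository):

```hcl
# example.nomad.hcl — a hypothetical minimal job specification
job "http-echo" {
  datacenters = ["dc1"]   # must match your cluster's datacenter name

  group "echo" {
    count = 1

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        args  = ["-listen", ":8080", "-text", "hello from nomad"]
      }

      resources {
        cpu    = 100   # MHz
        memory = 64    # MB
      }
    }
  }
}
```

Submit it with nomad job run example.nomad.hcl and inspect it with nomad job status http-echo.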
To scale the cluster, increase nomad_server_count and nomad_client_count, then apply your changes.

When adding a server, edit /etc/consul.d/consul.hcl on all the existing servers in the cluster and add the private IP of the new server. Then, on the new server, run nomad server join <server-private-addr>
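For illustration, the list of server addresses usually lives in Consul's retry_join setting; the exact contents of your consul.hcl will differ, and the IPs below follow this guide's addressing scheme:

```hcl
# /etc/consul.d/consul.hcl (fragment) — IPs follow this guide's 10.8.0.x scheme
retry_join = [
  "10.8.0.1",   # nomad-server-1
  "10.8.0.2",   # nomad-server-2
  "10.8.0.3",   # nomad-server-3, the newly added server
]
```

After editing, restart the agent with systemctl restart consul so the change takes effect.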
. You can read more about this command at https://developer.hashicorp.com/nomad/docs/commands/server/join.

Servers and clients are assigned sequential private IPs. Servers start at 10.8.0.1: for nomad-server-1 the IP is 10.8.0.1, for nomad-server-2 it is 10.8.0.2, and so on. Clients start at 10.8.0.10: for nomad-client-1 the IP is 10.8.0.10, for nomad-client-2 it is 10.8.0.11, and so on.

If nomad-client-1 isn't showing up in your cluster, log in to nomad-server-1 and ping 10.8.0.10. The ping should succeed; if it doesn't, the private network isn't working.
Review your settings and reach out to us if you still have issues.
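To see which machines have joined, Nomad's standard membership commands can be run on any server:

```shell
# List the servers participating in the Raft cluster
nomad server members

# List client nodes and their status; a client with networking problems
# will be missing or shown as down here
nomad node status
```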
Check netplan and consul settings
Open /etc/netplan/50-cloud-init.yaml and check that the settings look correct. Read Private networking if you're unfamiliar with how Netplan should be set up.
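As an illustration only, a Netplan VLAN stanza generally looks like the sketch below; the interface name, VLAN ID, and address are assumptions and will differ on your servers:

```yaml
# /etc/netplan/50-cloud-init.yaml (fragment) — names and addresses are examples
network:
  version: 2
  vlans:
    vlan2000:            # hypothetical VLAN interface name
      id: 2000           # the VID of your private network
      link: enp1s0       # hypothetical parent interface
      addresses:
        - 10.8.0.1/24    # this server's private IP
```

After editing, apply the configuration with netplan apply.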