Developed in collaboration with eOracle, this guide walks you through setting up a Kubernetes cluster on bare metal using RKE2. RKE2 (Rancher Kubernetes Engine 2) is a lightweight, secure, and production-ready Kubernetes distribution designed to simplify cluster deployment and management. Let’s dive in!

Prerequisites

  • Latitude.sh servers (at least 3 control-plane nodes and 2 worker nodes).
  • kubectl installed on your local machine.
  • A DNS A-record pointing to the control-plane nodes.
  • Properly configured IP addresses for all nodes.
  • SSH access to all nodes with root privileges.
  • A server token and agent token to securely join nodes to the cluster.
  • Cilium as the CNI plugin.

Step 1: Prepare DNS and generate tokens

1. Set up DNS record

Set up a DNS A-record that points to the external IP addresses of all control-plane nodes. This record will serve as the Kubernetes API endpoint and the RKE2 registration address.
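Before continuing, you can confirm the record resolves to all control-plane nodes; a quick check with dig (rke2.ext.example.com is the example hostname used throughout this guide):
# Every control-plane external IP should appear in the output
dig +short rke2.ext.example.com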
2. Generate tokens

Run the following command twice to create a server-token (for control-plane nodes) and an agent-token (for worker nodes):
openssl rand -hex 32
Save both tokens for use during the cluster setup.
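If you script the setup, a minimal sketch that generates both tokens and keeps them in the current shell (the variable names are illustrative):
# Generate two independent 32-byte hex tokens
export SERVER_TOKEN=$(openssl rand -hex 32)
export AGENT_TOKEN=$(openssl rand -hex 32)
echo "server-token: ${SERVER_TOKEN}"
echo "agent-token:  ${AGENT_TOKEN}"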

Step 2: Configure RKE2 on the first control-plane node

On the first server, create the RKE2 configuration file at /etc/rancher/rke2/config.yaml with the following content:
token: <server-token>
agent-token: <agent-token>
tls-san:
  - rke2.ext.example.com
  - <control_plane_ip>
node-name: control-plane-01
advertise-address: <control_plane_ip>
node-ip: <node_internal_ip>
node-external-ip: <node_external_ip>
disable:
  - rke2-ingress-nginx
cni: "cilium"
disable-kube-proxy: true
Replace <server-token> and <agent-token> with the tokens generated in Step 1, and substitute the IP placeholders with this node's addresses.

Step 3: Install RKE2

1. Set RKE2 version

Set the desired RKE2 version as an environment variable:
export RKE2_VERSION=v1.31.1+rke2r1
2. Install RKE2 server

Run the following command to install the RKE2 distribution on the first control-plane node:
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=${RKE2_VERSION} INSTALL_RKE2_TYPE=server INSTALL_RKE2_CHANNEL=stable INSTALL_RKE2_METHOD=tar sh -
3. Start RKE2 service

Enable RKE2 to start on boot, start the service, and observe the service logs:
systemctl enable rke2-server.service && systemctl restart rke2-server.service && journalctl -u rke2-server -f

Step 4: Configure additional control-plane nodes

1. Get tokens from first node

After deploying the first control-plane node, retrieve the join tokens from it: the server token is stored at /var/lib/rancher/rke2/server/token and the agent token at /var/lib/rancher/rke2/server/agent-token.
The server option is required on all nodes except the first.
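For example, you can print both tokens directly on control-plane-01 (default RKE2 paths):
# Print the join tokens on the first control-plane node
cat /var/lib/rancher/rke2/server/token
cat /var/lib/rancher/rke2/server/agent-token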
2. Configure additional control-plane nodes

Create the configuration file /etc/rancher/rke2/config.yaml on each additional control-plane node. Example for control-plane-02:
server: https://rke2.ext.example.com:9345
token: <server-token>
agent-token: <agent-token>
tls-san:
  - rke2.ext.example.com
  - <control_plane_ip>
node-name: control-plane-02
advertise-address: <control_plane_ip>
node-ip: <node_internal_ip>
node-external-ip: <node_external_ip>
disable:
  - rke2-ingress-nginx
cni: "cilium"
disable-kube-proxy: true
Example for control-plane-03:
server: https://rke2.ext.example.com:9345
token: <server-token>
agent-token: <agent-token>
tls-san:
  - rke2.ext.example.com
  - <control_plane_ip>
node-name: control-plane-03
advertise-address: <control_plane_ip>
node-ip: <node_internal_ip>
node-external-ip: <node_external_ip>
disable:
  - rke2-ingress-nginx
cni: "cilium"
disable-kube-proxy: true
Repeat this process for all control-plane nodes, updating the node-name and IP addresses accordingly.
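Since the files differ only in a few fields, one option is to render them from per-node variables; a hedged sketch, assuming SERVER_TOKEN and AGENT_TOKEN are exported as in Step 1 and the quoted placeholders are replaced with each node's real values:
# Render /etc/rancher/rke2/config.yaml from per-node values (illustrative)
NODE_NAME="control-plane-02"        # set per node
CP_IP="<control_plane_ip>"
NODE_IP="<node_internal_ip>"
NODE_EXT_IP="<node_external_ip>"
mkdir -p /etc/rancher/rke2
cat > /etc/rancher/rke2/config.yaml <<EOF
server: https://rke2.ext.example.com:9345
token: ${SERVER_TOKEN}
agent-token: ${AGENT_TOKEN}
tls-san:
  - rke2.ext.example.com
  - ${CP_IP}
node-name: ${NODE_NAME}
advertise-address: ${CP_IP}
node-ip: ${NODE_IP}
node-external-ip: ${NODE_EXT_IP}
disable:
  - rke2-ingress-nginx
cni: "cilium"
disable-kube-proxy: true
EOF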
3. Configure worker nodes

On each worker node, create the following configuration file at /etc/rancher/rke2/config.yaml:
server: https://rke2.ext.example.com:9345
token: <agent-token>
node-name: worker-node-01
node-ip: <worker_node_internal_ip>
node-external-ip: <worker_node_external_ip>
node-label:
  - "node.kubernetes.io/role=worker"
4. Install RKE2 agent

Set the RKE2 version:
export RKE2_VERSION=v1.31.1+rke2r1
Install the RKE2 distribution:
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=${RKE2_VERSION} INSTALL_RKE2_TYPE=agent INSTALL_RKE2_CHANNEL=stable INSTALL_RKE2_METHOD=tar sh -
Enable RKE2 to start on boot, start the service, and observe the service logs:
systemctl enable rke2-agent.service && systemctl restart rke2-agent.service && journalctl -u rke2-agent -f
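Once the agent service is running, the new worker should appear from any control-plane node within a minute or so (using the bundled kubectl, as in the Troubleshooting section below):
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes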

Step 5: Acquire the kubeconfig file

1. Locate kubeconfig

Locate the kubeconfig file on any control-plane node at /etc/rancher/rke2/rke2.yaml. The file should look similar to this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CERTIFICATE_DATA>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <CLIENT_CERTIFICATE_DATA>
    client-key-data: <CLIENT_KEY_DATA>
2. Update server address

Replace the server value with the external API endpoint:
server: https://rke2.ext.example.com:6443
3. Copy to local machine

Copy the file to your local machine and save it as ~/.kube/config or another location (e.g., ~/.kube/test). Use the following command to check the cluster nodes:
kubectl get nodes --kubeconfig ~/.kube/test
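To do the copy and rewrite in one pass, a hedged sketch assuming root SSH access to the first control-plane node (GNU sed syntax):
# Fetch the kubeconfig and point it at the external API endpoint
scp root@<control_plane_ip>:/etc/rancher/rke2/rke2.yaml ~/.kube/test
sed -i 's|https://127.0.0.1:6443|https://rke2.ext.example.com:6443|' ~/.kube/test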

Step 6: Configure Cilium for kube-proxy replacement

1. Create Cilium configuration

Create a file named rke2-cilium-values.yaml with this content:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    k8sServiceHost: "rke2.ext.example.com"
    k8sServicePort: "6443"
    kubeProxyReplacement: "true"
You must set k8sServiceHost and k8sServicePort (these populate KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT for the Cilium agents) to enable kube-proxy replacement mode. See the RKE2 HelmChartConfig documentation for more information.
2. Apply configuration

Run this command to apply the configuration:
kubectl apply -f ./rke2-cilium-values.yaml --kubeconfig ~/.kube/test
The output confirms that the configuration was applied:
helmchartconfig.helm.cattle.io/rke2-cilium created
Cilium is now configured to replace kube-proxy.
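To verify, one option is to query the agent's status from inside a Cilium pod (a hedged check; the cilium binary ships in the agent image, and with the rke2-cilium chart the agent DaemonSet is typically named cilium in kube-system):
# KubeProxyReplacement should report True
kubectl --kubeconfig ~/.kube/test -n kube-system exec ds/cilium -- cilium status | grep -i kubeproxyreplacement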

Step 7: Check cluster status

1. Check pod status

Run the following to check the status of all pods in all namespaces:
kubectl get pods -A --kubeconfig ~/.kube/test
Ensure all pods are in the Running state.
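To spot stragglers quickly, a field selector can filter out healthy pods (note that Succeeded pods from completed Jobs will also match):
# List only pods that are not in the Running phase
kubectl get pods -A --field-selector=status.phase!=Running --kubeconfig ~/.kube/test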
2. Check node status

Check the status of all nodes:
kubectl get nodes --kubeconfig ~/.kube/test
Every node should report Ready in the STATUS column.
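In provisioning scripts it can be handy to block until every node is Ready (adjust the timeout to taste):
# Wait up to 5 minutes for all nodes to become Ready
kubectl wait --for=condition=Ready node --all --timeout=300s --kubeconfig ~/.kube/test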

Troubleshooting

Add these aliases to troubleshoot directly on a control-plane node:
alias kubectl='/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml'
kubectl get nodes

alias crictl='/var/lib/rancher/rke2/bin/crictl'
export CONTAINER_RUNTIME_ENDPOINT="unix:///run/k3s/containerd/containerd.sock"
crictl ps
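
If a node fails to join, the service journal shown earlier is the first stop; RKE2 also writes component logs under its data directory (default paths shown; the kubelet log in particular is worth checking):
# Follow the kubelet log on the affected node
tail -f /var/lib/rancher/rke2/agent/logs/kubelet.log
# Containerd keeps its own log alongside its state
tail -f /var/lib/rancher/rke2/agent/containerd/containerd.log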

That’s it!

Your cluster is now set up and ready to handle workloads on bare metal. Check out our guide on Kubernetes load balancing to optimize traffic management across your nodes.