Prerequisites
- Latitude.sh servers (at least 3 control-plane nodes and 2 worker nodes).
- kubectl installed on your local machine.
- A DNS A-record pointing to the control-plane nodes.
- Properly configured IP addresses for all nodes.
- SSH access to all nodes with root privileges.
- A server token and an agent token to securely join nodes to the cluster.
- Cilium as the CNI plugin.
Step 1: Prepare DNS and generate tokens
1. Set up DNS record
Set up a DNS A-record that points to the external IP addresses of all control-plane nodes. This record will serve as the Kubernetes API endpoint and the RKE2 registration address.
2. Generate tokens
Run the following command twice to create a server-token (for control-plane nodes) and an agent-token (for worker nodes). Save both tokens for use during the cluster setup.
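The original command did not survive in this copy; a common choice (an assumption, not necessarily what the guide used) is to generate a random hex string with openssl:

```shell
# Prints one random 64-character hex token per run.
# Run once for the server token and once for the agent token.
openssl rand -hex 32
```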
Step 2: Configure RKE2 on the first control-plane node
On the first server, create the RKE2 configuration file at /etc/rancher/rke2/config.yaml with the following content:
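The file contents were lost in this copy; a minimal sketch using standard RKE2 configuration keys, with the DNS record used later in this guide and placeholder addresses (node names and IPs are illustrative assumptions):

```yaml
# /etc/rancher/rke2/config.yaml -- first control-plane node (illustrative values)
token: <server-token>
agent-token: <agent-token>
node-name: control-plane-01
node-ip: 10.0.0.10              # example private IP of this node
node-external-ip: 203.0.113.10  # example public IP of this node
tls-san:
  - rke2.ext.example.com        # the DNS A-record from Step 1
cni: cilium                     # Cilium as the CNI plugin (see Prerequisites)
disable-kube-proxy: true        # needed for Cilium kube-proxy replacement (Step 6)
```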
Replace <server-token> and <agent-token> with the generated tokens.
Step 3: Install RKE2
1. Set RKE2 version
Set the desired RKE2 version as an environment variable:
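For example (the version string v1.31.1+rke2r1 is the one referenced later in this guide; pin whichever release you need):

```shell
# Pin the RKE2 release that the install script should fetch.
export RKE2_VERSION=v1.31.1+rke2r1
echo "$RKE2_VERSION"
```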
2. Install RKE2 server
Run the following command to install the RKE2 distribution on the first control-plane node:
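The command itself is missing from this copy; RKE2's official install script at get.rke2.io supports pinning the release via the INSTALL_RKE2_VERSION variable, so a sketch would be:

```shell
# Download and run the official RKE2 install script,
# pinned to the version set in the previous step.
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="${RKE2_VERSION}" sh -
```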
3. Start RKE2 service
Enable RKE2 to start on boot, start the service, and observe the service logs:
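The service commands were lost in extraction; for the server role, the systemd unit installed by the script is rke2-server, so the steps would look like:

```shell
systemctl enable rke2-server.service   # start on boot
systemctl start rke2-server.service    # start now
journalctl -u rke2-server -f           # follow the service logs
```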
Step 4: Configure additional control-plane nodes
1. Get tokens from first node
After deploying the first control-plane node, use the tokens located at /var/lib/rancher/rke2/server as the server token and /var/lib/rancher/rke2/server/agent-token as the agent token. The server option applies to all nodes except the first.
2. Configure additional control-plane nodes
Create the configuration file /etc/rancher/rke2/config.yaml on each additional control-plane node (for example, control-plane-02 and control-plane-03). Repeat this process for all control-plane nodes, updating the node-name and IP addresses accordingly.
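The example files did not survive in this copy; a sketch for control-plane-02, assuming the registration address from this guide and RKE2's supervisor port 9345 (node name and IPs are illustrative):

```yaml
# /etc/rancher/rke2/config.yaml -- control-plane-02 (illustrative values)
server: https://rke2.ext.example.com:9345  # RKE2 registration address (supervisor port)
token: <server-token>
agent-token: <agent-token>
node-name: control-plane-02
node-ip: 10.0.0.11
node-external-ip: 203.0.113.11
tls-san:
  - rke2.ext.example.com
cni: cilium
disable-kube-proxy: true
```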
3. Configure worker nodes
Install agent nodes and create the following configuration file at /etc/rancher/rke2/config.yaml:
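The worker configuration was also lost; a sketch under the same assumptions (agents join via the registration address using the agent token):

```yaml
# /etc/rancher/rke2/config.yaml -- worker node (illustrative values)
server: https://rke2.ext.example.com:9345
token: <agent-token>   # the agent token generated in Step 1
node-name: worker-01
node-ip: 10.0.0.20
node-external-ip: 203.0.113.20
```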
4. Install RKE2 agent
Set the desired RKE2 version (export RKE2_VERSION=v1.31.1+rke2r1), install the RKE2 distribution, then enable RKE2 to start on boot, start the service, and observe the service logs:
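Mirroring the server install, a sketch of the agent-side commands (INSTALL_RKE2_TYPE=agent and the rke2-agent unit name come from the official install script; treat this as an assumed reconstruction):

```shell
export RKE2_VERSION=v1.31.1+rke2r1
# Install in agent mode, which sets up the rke2-agent service.
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION="${RKE2_VERSION}" INSTALL_RKE2_TYPE=agent sh -
systemctl enable rke2-agent.service
systemctl start rke2-agent.service
journalctl -u rke2-agent -f
```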
Step 5: Acquire the kubeconfig file
1. Locate kubeconfig
Locate the kubeconfig file on any control-plane node at /etc/rancher/rke2/rke2.yaml. The file should look similar to this:
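The example file is missing from this copy; an RKE2-generated kubeconfig typically looks like the abridged sketch below (certificate data omitted, all entries named "default"):

```yaml
# Abridged example of /etc/rancher/rke2/rke2.yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    client-certificate-data: <base64-encoded certificate>
    client-key-data: <base64-encoded key>
```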
2. Update server address
Replace the server value with the external registration address:
server: https://rke2.ext.example.com:6443
3. Copy to local machine
Copy the file to your local machine and save it as ~/.kube/config or another location (e.g., ~/.kube/test). Use the following command to check the cluster nodes:
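The command was lost in this copy; a sketch, assuming the example path ~/.kube/test from this step:

```shell
# Point kubectl at the copied kubeconfig and list the cluster nodes.
export KUBECONFIG=~/.kube/test
kubectl get nodes
```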
Step 6: Configure Cilium for kube-proxy replacement
1. Create Cilium configuration
Create a file named rke2-cilium-values.yaml with this content:
You must specify KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT in the configuration to enable kube-proxy replacement mode. See the HelmChartConfig documentation for more information.
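The file contents did not survive extraction; a sketch of such a values file, using RKE2's HelmChartConfig mechanism for its bundled rke2-cilium chart (the host and port values are examples, matching the registration address used in this guide):

```yaml
# rke2-cilium-values.yaml -- illustrative HelmChartConfig for RKE2's Cilium chart
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true
    k8sServiceHost: rke2.ext.example.com   # KUBERNETES_SERVICE_HOST
    k8sServicePort: 6443                   # KUBERNETES_SERVICE_PORT
```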
2. Apply configuration
Run this command to apply the configuration:
Check that the configuration has been applied:
Cilium is now configured to replace kube-proxy.
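Both commands were lost in this copy; a sketch, assuming the file name from the previous step and querying the running Cilium agent for its kube-proxy replacement status:

```shell
# Apply the Cilium HelmChartConfig created in the previous step.
kubectl apply -f rke2-cilium-values.yaml

# Check that kube-proxy replacement is active in the Cilium agent.
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement
```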
Step 7: Check cluster status
1. Check pod status
Run the following to check the status of all pods in all namespaces:
Ensure all pods are in the Running state.
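The command is missing in this copy; the standard way to list pods across namespaces is:

```shell
kubectl get pods --all-namespaces
```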
2. Check node status
Check the status of all nodes:
Nodes should show Ready in the STATUS column.
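Again the command did not survive; listing nodes is standard kubectl:

```shell
kubectl get nodes
```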