Prerequisites
- Latitude.sh servers (at least 3 control-plane nodes and 2 worker nodes).
- kubectl installed on your local machine.
- A DNS A-record pointing to the control-plane nodes.
- Properly configured IP addresses for all nodes.
- SSH access to all nodes with root privileges.
- A server token and agent token to securely join nodes to the cluster.
- Cilium as the CNI plugin.
Step 1: Prepare DNS and generate tokens
Set up DNS record
Set up a DNS A-record that points to the external IP addresses of all control-plane nodes. This record will serve as the Kubernetes API endpoint and the RKE2 registration address.
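Generate tokens
The server and agent tokens referenced in the next step are shared secrets and can be any sufficiently long random strings. A minimal sketch, assuming openssl is available on your workstation:

```bash
# Generate two random secrets; use the first as <server-token> and the second as <agent-token>
openssl rand -hex 32
openssl rand -hex 32
```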
Step 2: Configure RKE2 on the first control-plane node
On the first server, create the RKE2 configuration file at /etc/rancher/rke2/config.yaml with the following content:
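A minimal sketch of such a config.yaml, assuming the rke2.ext.example.com registration address used later in this guide, Cilium as the CNI with kube-proxy replacement (see Step 6), and placeholder node names and IP addresses:

```yaml
# /etc/rancher/rke2/config.yaml on the first control-plane node (example values)
token: <server-token>            # shared secret used by servers joining the cluster
agent-token: <agent-token>       # shared secret used by agents (workers) joining the cluster
node-name: control-plane-01      # placeholder node name
node-ip: 10.0.0.11               # placeholder internal IP, replace with this node's address
node-external-ip: 203.0.113.11   # placeholder external IP
tls-san:
  - rke2.ext.example.com         # DNS record created in Step 1
cni: cilium
disable-kube-proxy: true         # kube-proxy is replaced by Cilium in Step 6
```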
Replace <server-token> and <agent-token> with the generated tokens.
Step 3: Install RKE2
Set RKE2 version
Set the desired RKE2 version as an environment variable:
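For example, using the INSTALL_RKE2_VERSION variable honored by the RKE2 install script (the version string below is only an example; pick the release you need):

```bash
# Version consumed by the RKE2 install script
export INSTALL_RKE2_VERSION="v1.30.2+rke2r1"
```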
Install RKE2 server
Run the following command to install the RKE2 distribution on the first control-plane node:
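A typical invocation, assuming the version variable set above and the official installer from get.rke2.io:

```bash
# Download and run the RKE2 installer in server mode
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="server" sh -

# Enable and start the RKE2 server service
systemctl enable --now rke2-server.service
```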
Step 4: Configure additional control-plane nodes
Get tokens from first node
After deploying the first control-plane node, use the tokens located at /var/lib/rancher/rke2/server/token (the server token) and /var/lib/rancher/rke2/server/agent-token (the agent token). The server option applies to all nodes except the first.
Configure additional control-plane nodes
Create the configuration file /etc/rancher/rke2/config.yaml on each additional control-plane node. An example for control-plane-02 is sketched below; control-plane-03 follows the same pattern. Repeat this process for all control-plane nodes, updating the node-name and IP addresses accordingly.
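A sketch for control-plane-02, assuming the same options as the first node plus the cluster registration address; the node name and IPs below are placeholders:

```yaml
# /etc/rancher/rke2/config.yaml on control-plane-02 (example values)
server: https://rke2.ext.example.com:9345   # registration address (RKE2 supervisor port 9345)
token: <server-token>
agent-token: <agent-token>
node-name: control-plane-02
node-ip: 10.0.0.12                          # placeholder internal IP
node-external-ip: 203.0.113.12              # placeholder external IP
tls-san:
  - rke2.ext.example.com
cni: cilium
disable-kube-proxy: true
```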
Configure worker nodes
Install agent nodes and create the following configuration file at /etc/rancher/rke2/config.yaml:
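A sketch of the agent configuration, assuming the agent token from the first node and placeholder names and IPs:

```yaml
# /etc/rancher/rke2/config.yaml on a worker (agent) node - example values
server: https://rke2.ext.example.com:9345   # same registration address as the control plane
token: <agent-token>                        # agents join with the agent token
node-name: worker-01                        # placeholder node name
node-ip: 10.0.0.21                          # placeholder internal IP
node-external-ip: 203.0.113.21              # placeholder external IP
```

To install and start the agent, the same installer from Step 3 can be run in agent mode:

```bash
# Install RKE2 in agent mode and start the agent service
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
systemctl enable --now rke2-agent.service
```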
Step 5: Acquire the kubeconfig file
Locate kubeconfig
Locate the kubeconfig file on any control-plane node at /etc/rancher/rke2/rke2.yaml. The file should look similar to this:
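A sketch of what rke2.yaml typically contains (certificate data abbreviated with placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA certificate>
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
users:
- name: default
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
```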
Update server address
Replace the server value with the external registration address:
server: https://rke2.ext.example.com:6443
Step 6: Configure Cilium for kube-proxy replacement
Create Cilium configuration
Create a file named rke2-cilium-values.yaml with this content:
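A sketch of such a file, written as an RKE2 HelmChartConfig for the bundled rke2-cilium chart (commonly placed under /var/lib/rancher/rke2/server/manifests/ so RKE2 applies it) and assuming the rke2.ext.example.com API endpoint:

```yaml
# rke2-cilium-values.yaml - example overrides for the bundled rke2-cilium chart
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    kubeProxyReplacement: true   # older Cilium chart versions may expect "strict" here
    # These values populate KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT for the
    # Cilium agents so they can reach the API server without kube-proxy.
    k8sServiceHost: rke2.ext.example.com
    k8sServicePort: 6443
```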
You must specify KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT in the configuration to enable kube-proxy replacement mode. See the HelmChartConfig documentation for more information.
Check pod status
Run the following command to check the status of all pods in all namespaces, and ensure that all pods are in the Running state:
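Assuming the kubeconfig from Step 5 is in use:

```bash
# List all pods across all namespaces; every pod should eventually report Running
kubectl get pods --all-namespaces
```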