Getting into Kubernetes can be a little intimidating. In this guide, I’ll show you how to set up a 6-node Rancher and Kubernetes environment on some Ubuntu VMs: a 3-node management cluster and a 3-node worker cluster.
To start, you’re going to need 6 servers:
- rancher-man01 – First node of the “management” cluster for Kubernetes. These will run Rancher and an assortment of system containers.
- rancher-man02 – Second node of the “management” cluster.
- rancher-man03 – Third node of the “management” cluster.
- rancher-work01 – First node of the “worker” cluster for Kubernetes. These will run your actual workloads.
- rancher-work02 – Second node of the “worker” cluster.
- rancher-work03 – Third node of the “worker” cluster.
You’re also going to need a single, centralized DNS name that points to each of your rancher-man* servers. Ideally that name would sit behind a load balancer, but if you don’t have one of those, round-robin DNS works fine. For example, if our domain is network.local, we can create A records like this:
- rancher.network.local – 10.0.0.51 (IP of rancher-man01)
- rancher.network.local – 10.0.0.52 (IP of rancher-man02)
- rancher.network.local – 10.0.0.53 (IP of rancher-man03)
For now, only create an entry for rancher-man01. We’ll add the others later.
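If you happen to run your own BIND server for network.local, those records might look something like this in the zone file (the file location and the rest of the zone depend entirely on your setup, and any DNS server that supports multiple A records for one name works just as well):

; Round-robin A records for rancher.network.local (example IPs from the list above)
rancher      IN  A   10.0.0.51   ; rancher-man01 -- create this one now
; rancher    IN  A   10.0.0.52   ; rancher-man02 -- uncomment once that node has joined
; rancher    IN  A   10.0.0.53   ; rancher-man03 -- uncomment once that node has joined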
After those are all online and ready, you need to run a few commands to get the rancher server up and running. Start with rancher-man01.
All commands should be run as root. (sudo su -)
rancher-man01
First, create a configuration file and populate it with some basic information:
mkdir -p /etc/rancher/rke2
vim /etc/rancher/rke2/config.yaml

# config.yaml content below:
token: my-shared-secret # Change this to a nice secure string; you'll share it between your cluster nodes
tls-san:
  - rancher.network.local
Then, install RancherD, and start the service.
curl -sfL https://get.rancher.io | sh -
systemctl enable rancherd-server.service
systemctl start rancherd-server.service
Use this command to watch the RancherD service start up. It may take a few minutes for everything to stabilize, and you’ll probably see some error messages. This is normal.
journalctl -eu rancherd-server -f

# You should see something like this:
level=info msg="Handling backend connection request [rancher-man01]"
Now you can set some environment variables and check on the Kubernetes status:
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
PATH=$PATH:/var/lib/rancher/rke2/bin
kubectl get daemonset rancher -n cattle-system
kubectl get pod -n cattle-system
You should see something like this:
root@rancher-man01:~# kubectl get daemonset rancher -n cattle-system
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
rancher   1         1         1       1            1           node-role.kubernetes.io/master=true   4m13s
root@rancher-man01:~# kubectl get pod -n cattle-system
NAME                               READY   STATUS      RESTARTS   AGE
helm-operation-4vch8               0/2     Completed   0          2m46s
helm-operation-84rfh               0/2     Completed   0          3m22s
helm-operation-dt4gn               0/2     Completed   0          2m20s
helm-operation-xspdr               0/2     Completed   0          3m2s
helm-operation-z9b48               0/2     Completed   0          2m30s
rancher-c46b6                      1/1     Running     0          4m25s
rancher-webhook-798c5599d9-67ch7   1/1     Running     0          2m26s
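You can also do a quick sanity check on the node itself with plain kubectl (nothing RancherD-specific here):

kubectl get nodes -o wide
# At this point there should be a single node, rancher-man01, showing a Ready status.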
Now that Rancher is up and running, you need to reset the admin password so that you can log into the web interface.
rancherd reset-admin
That should give you a temporary password that you can use to log into the web interface, which will be at https://rancher.network.local:8443, or whatever other DNS name you set earlier in this tutorial. For now though, just write it down. We need to set the other cluster members up first.
If you lose that password, you can just run that command again to get a new one.
rancher-man02
Setup on this node will be pretty similar to the first one, but with a few minor changes. First, we’re going to create that config.yaml file, but we’re going to add a reference to the existing node.
mkdir -p /etc/rancher/rke2
vim /etc/rancher/rke2/config.yaml

# config.yaml content below:
server: https://rancher.network.local:9345
token: my-shared-secret # Use the same secret that you set on rancher-man01
tls-san:
  - rancher.network.local
Then, install RancherD, and start the service.
curl -sfL https://get.rancher.io | sh -
systemctl enable rancherd-server.service
systemctl start rancherd-server.service
Use this command to watch the RancherD service start up. It may take a few minutes for everything to stabilize, and you’ll probably see some error messages. This is normal. If you find that it’s taking a really long time and doesn’t seem to resolve itself, rebooting the server might help.
journalctl -eu rancherd-server -f

# You should see something like this:
level=info msg="Handling backend connection request [rancher-man02]"
Now you can set some environment variables and check on the Kubernetes status:
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
PATH=$PATH:/var/lib/rancher/rke2/bin
kubectl get daemonset rancher -n cattle-system
kubectl get pod -n cattle-system
You should see something like this:
root@rancher-man02:~# kubectl get daemonset rancher -n cattle-system
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
rancher   2         2         2       2            2           node-role.kubernetes.io/master=true   90m
root@rancher-man02:~# kubectl get pod -n cattle-system
NAME                               READY   STATUS    RESTARTS   AGE
rancher-bkl5v                      1/1     Running   0          17m
rancher-webhook-798c5599d9-5r8zj   1/1     Running   0          88m
rancher-z4prp                      1/1     Running   0          90m
Then, just do the same thing on rancher-man03.
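For reference, the config.yaml on rancher-man03 is identical to the one on rancher-man02: it points at the existing cluster and reuses the shared token. The install, enable/start, and journalctl steps are the same as before.

mkdir -p /etc/rancher/rke2
vim /etc/rancher/rke2/config.yaml

# config.yaml content below:
server: https://rancher.network.local:9345
token: my-shared-secret # Same secret as the other two nodes
tls-san:
  - rancher.network.local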
When both of those are done, update DNS with the extra records I described above, and then you should be able to hit https://rancher.network.local:8443 and log in with the username “admin” and the password that was generated earlier.
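A quick way to confirm the round-robin is working is to query the name directly (this assumes dig is installed on your machine; the record order in the output may vary):

dig +short rancher.network.local
# Expected output, in some order:
# 10.0.0.51
# 10.0.0.52
# 10.0.0.53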
Now, we can create the worker cluster.
rancher-work*
- First, you need to install Docker on all of your rancher-work nodes. You can do that by running this set of commands:
apt update
apt install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install -y docker-ce docker-ce-cli containerd.io

# After docker is installed, you can verify the installation with this:
docker run hello-world
- In the Rancher web UI, click the Add Cluster button at the top right of the page.
- Select Existing Nodes
- Name the cluster something useful. I’m going to call mine lab.
- At the time of writing, Canal is the default network provider for Rancher, so I’ll be using that. There are other CNI options available, though, so take some time to research and compare them if you’re curious.
- All of the other settings can be left at their defaults.
- Once you create the cluster, you’ll be brought to a screen with 3 checkboxes (the node roles) and a big pre-generated command at the bottom. Ensure that all 3 boxes are checked, and then run that whole command on each of the rancher-work servers.
- A minute or so after the command runs, you should see “3 new nodes have registered” at the bottom of the Rancher web page. When the nodes are registered, you can just click Done at the bottom of the page.
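If you want to double-check a worker before clicking Done, the registration command essentially just starts a rancher-agent container, so you can see it with docker (once registration finishes, you’ll also see several more containers for the Kubernetes components Rancher launches on the node):

docker ps | grep rancher-agent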
Now, you can create some DNS entries to give the workers a nice load-balanced (round-robin) name:
- rancher-work.network.local – 10.0.0.54 (IP of rancher-work01)
- rancher-work.network.local – 10.0.0.55 (IP of rancher-work02)
- rancher-work.network.local – 10.0.0.56 (IP of rancher-work03)
Once that’s done, you have your Kubernetes cluster. Now we just need to run a workload.
- In the Rancher UI, click on Cluster Explorer at the top right of the page.
- On the left, click Deployments
- At the top right, click Create
- For a name, just enter something descriptive. I’m going to use nginx-test
- For a container image, I’m going to use nginxdemos/hello. It’s a nice test container.
- Click Add Port
- For Service Type, you have a few options:
- Do not create a service – Gives you the option to create the service definition later
- Cluster IP – Puts the service on an IP only accessible from within the kubernetes cluster. Useful if you want containers to be able to communicate with each other.
- Node Port – Publishes the service on each of the nodes in your cluster.
- Load Balancer – Exposes the service using a cloud provider’s load balancer. Not useful if you’re not using AWS, Google, etc.
- We’re actually going to be creating something called an “ingress” to point at this nginx container. An ingress is essentially an HTTP load balancer that runs on top of Kubernetes and routes incoming requests to services inside the cluster. So for now, we’re going to choose Cluster IP.
- For name, you can just set a friendly name for the service. I’m using http in this instance.
- That container exposes nginx on port 80, so set that as the “Private Container Port”, and keep TCP selected.
- With that set, go to Health Check on the left. For the readiness check, set the type to HTTP, and the check port to 80. Set the path to /. Do the same for the Liveness and Startup checks.
- You can check out the rest of the options, but that’s all we need for now. Click Create at the bottom right of the page. Your container will start up in a few seconds.
- Now, go to Ingresses on the left. Click Create at the top right of that page.
- Set the name to nginx-test
- The rest of the values here are useful for routing to different services running on the same cluster, but for this example, we’re just going to set the path to /, the target service to nginx-test, and select port 80. (There’s a YAML sketch of everything we just built right after this list.)
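If you’re curious what the UI has actually been creating, here’s a rough, hand-written YAML sketch of the three objects from the steps above. It’s an approximation rather than the exact YAML Rancher generates (Rancher adds its own labels and annotations), it assumes everything lives in the default namespace, and on older Kubernetes versions the Ingress apiVersion would be networking.k8s.io/v1beta1 with slightly different fields:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx-test
          image: nginxdemos/hello
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /
              port: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
          startupProbe:
            httpGet:
              path: /
              port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: nginx-test
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  namespace: default
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-test
                port:
                  number: 80

If you ever want to manage these objects with kubectl instead of the UI, applying YAML like this is essentially what the buttons are doing for you.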
Now, if you visit http://rancher-work.network.local you should see a nice nginx screen showing the IP address and hostname of the container you’re hitting.
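You can do the same check from a terminal; the demo page is plain HTML, so a quick grep shows which backend answered (the exact markup is up to the nginxdemos/hello image, so tweak the grep if it comes back empty):

curl -s http://rancher-work.network.local/ | grep -i server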
Expanding The Deployment
You may notice that if you refresh your browser, you’ll keep seeing the same IP and hostname, even though we have 3 worker servers supporting our container deployment. That’s because the “Replicas” value for our deployment is only set to 1, so we only have 1 container deployed. Let’s change that.
- Go to Deployments on the left, click the 3 dots on the right side next to nginx-test, and select Edit YAML.
- Find the replicas field (around line 137 in the generated YAML) and change it to 3.
- Then click Save at the bottom. In a few seconds, the new containers should be deployed.
- If you go back to your browser and refresh http://rancher-work.network.local a few times, you should see 3 different server names and IPs shown.
- Wanna take it further? Change replicas to 100 and see what happens. You can see the individual containers being created under “Pods” on the left.
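If you’d rather do this from a terminal, you can download a kubeconfig for the cluster from the Rancher UI and make the same change with kubectl. This sketch assumes the deployment is in the default namespace, as in this example:

kubectl -n default scale deployment nginx-test --replicas=3
kubectl -n default get pods -w   # watch the new pods appear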
And that’s the extent of my tutorial. There’s a ton more to Rancher and Kubernetes, so it’s certainly not exhaustive, but hopefully this guide helps you get things started.
Like this article? Have questions? Want another post about something else?
Put a comment below and let me know.