How to Install and Create a K8s Cluster on Linux

Meher Askri · Aug 2, 2024


Hello and welcome to my first article in this orchestration section. Today, we’ll delve into the fascinating world of Kubernetes, or K8s for short. In this guide, I’ll walk you through the process of installing and setting up your very own minimal Kubernetes cluster on RHEL 9. But before we jump into the CLI, let’s take a moment to understand the basics:

What is Kubernetes?

Kubernetes is an open-source container orchestration system. It automates deploying, scaling, and managing containerized applications, making it easier to run and maintain complex software.

For those of you who are still confused by K8s, I encourage you to start by learning about containers first. (I will definitely create some content related to containers in the future to help you understand the bigger picture.)

Before we dive into the installation of Kubernetes, let me explain the key component here:

So, whether you’re working with a standalone container architecture or orchestrating with Kubernetes, you’ll always need a container runtime, which Kubernetes talks to through the Container Runtime Interface (CRI). In simpler terms, the container runtime is the software component responsible for starting and stopping containers. It serves as the underlying layer that interacts with the kernel to create isolated environments for containers.

The purpose of this article is to focus on Kubernetes, not containers specifically, so I won’t delve into detailed explanations of containers and kernel features such as cgroups and namespaces here. I just want to highlight the importance of the container runtime because, as I mentioned earlier, it’s an essential component whether you’re running containers manually with a container engine (like Docker or Podman) or automatically with Kubernetes.

Preparing Kubernetes requirements

Let’s get started. I’m going to use RHEL 9 as the control plane node, often referred to as the master node, with 2 CPUs, 4 GB of RAM, and 20 GB of storage.

I’ll show you the steps manually and, as always, I’ll provide a script at the end to simplify the process.

This journey might be a bit lengthy, so please stay focused and don’t miss any steps 😁.

The first thing we need to do is set up the Fully Qualified Domain Name (FQDN) for our Kubernetes server and add the corresponding entry to the local DNS resolver (/etc/hosts) for hostname resolution (I’m using “master.askri.lab” as the FQDN for this demonstration).
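Here’s a minimal sketch of those two steps. The IP address 192.168.1.10 is an assumption for this demo; use your server’s actual address:

    # Set the FQDN as the system hostname
    sudo hostnamectl set-hostname master.askri.lab
    # Map it in the local DNS resolver (replace 192.168.1.10 with your server's IP)
    echo "192.168.1.10 master.askri.lab master" | sudo tee -a /etc/hosts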

Next, if you are on any Red Hat distro, you need to disable SELinux, because they are not close friends 😂 (just kidding, it’s because Kubernetes doesn’t provide an SELinux policy).

I’ve already done that. All you need to do is set SELINUX=disabled in the /etc/selinux/config file and then reboot your system.
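If you haven’t done it yet, something along these lines should do it (a sketch; verify /etc/selinux/config before rebooting):

    # Switch SELinux to disabled in its config file
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
    # A reboot is required for the change to take effect
    sudo reboot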

Next, we must load two essential kernel modules, “overlay” and “br_netfilter”. These two modules are necessary for Kubernetes: “br_netfilter” enables bridged network traffic to be filtered, and “overlay” facilitates layering one filesystem on top of another (for more detail, use the modinfo command or check the kernel documentation).

So, we use modprobe to load each module manually, and then we add them to a config file under /etc/modules-load.d/ so they load automatically at boot time, as shown below.
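In practice, that looks roughly like this (the file name k8s.conf is my own choice; any name under /etc/modules-load.d/ works):

    # Load both modules for the current session
    sudo modprobe overlay
    sudo modprobe br_netfilter
    # Make them load automatically at every boot
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF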

Next, we need to enable some kernel features (sysctl parameters) related to networking, which are required for Kubernetes:

The first parameter, “net.ipv4.ip_forward”, allows the Linux kernel to forward packets between network interfaces. The second, “net.bridge.bridge-nf-call-ip6tables”, makes IPv6 traffic on bridges pass through ip6tables rules, and the last one, “net.bridge.bridge-nf-call-iptables”, ensures that iptables rules are applied to packet traffic on bridges. (To apply these changes, we use the “sysctl” command.)
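Here’s a sketch of those settings persisted to a file (again, the k8s.conf name is just a convention):

    # Persist the three parameters across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward                 = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables  = 1
    EOF
    # Apply them immediately without a reboot
    sudo sysctl --system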

But we’re not finished with our Linux system configuration yet. As a final step, let’s disable the swap partition. Why do we need to do this?

Remember when we disabled SELinux in the beginning and I told you K8s and SELinux don’t quite get along? Well, here’s another one to add to that list. Swap is also not a close friend of K8s. It’s like trying to get cats and dogs to dance together 😂. The practical reason is that the kubelet makes scheduling decisions based on available memory, and swap makes that accounting unreliable, so by default it refuses to run with swap enabled.

So, let’s disable the swap partition using the “swapoff” command. Then, I’ll use the “sed” command to locate the line in /etc/fstab that corresponds to the swap partition and comment it out by adding a ‘#’ at the beginning of the line. As you can see, swap will now show as 0.
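A sketch of those commands (the sed pattern assumes your swap entry in /etc/fstab contains the word “swap”):

    # Turn off all swap devices right away
    sudo swapoff -a
    # Comment out the swap line in /etc/fstab so it stays off after reboots
    sudo sed -i '/swap/ s/^/#/' /etc/fstab
    # Verify: the Swap line should now show 0
    free -h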

And that’s it, we’ve completed the first part of the setup.

Installing Kubernetes Components

OK, let’s move on to the second part of our setup. Our first task here is to install the container runtime, which in our case is containerd. Unfortunately, it’s not available in the default DNF repositories, so we’ll install it from Docker by adding the Docker repo first:
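Roughly like this (note the centos path in the repo URL; see the remark just below):

    # Add the Docker CE repository and install containerd
    sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo dnf install -y containerd.io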

One thing I’d like to mention here is that, as you can see, I pulled from the docker/centos repo instead of docker/rhel, because there was an issue with that repository. Hopefully they will fix it in the future.

Next, we need to remove the containerd config file (config.toml) under /etc/containerd/ and create a new one.

So, let’s take a closer look at the contents of this configuration file:
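Here’s a minimal sketch of the file, matching the settings described below (the section paths follow containerd’s v2 config format):

    # Replace the default config with a minimal one and restart containerd
    sudo rm -f /etc/containerd/config.toml
    sudo tee /etc/containerd/config.toml > /dev/null <<'EOF'
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd]
      discard_unpacked_layers = true

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF
    sudo systemctl restart containerd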

version = 2 : this line indicates the version of the configuration file format.

Under the “plugins” section, there are a few key settings:

discard_unpacked_layers = true : this option instructs containerd to discard the compressed layer blobs once an image has been unpacked. In simpler terms, it helps save disk space by removing layer data that is no longer needed.

runtime_type = “io.containerd.runc.v2” : this tells containerd to use version 2 of the runc shim (runc is the low-level command-line tool that actually runs containers).

SystemdCgroup = true : enabling this option means we’ll use systemd-managed cgroups for containers. In other words, systemd leverages the cgroup kernel feature to effectively isolate and control processes within containers.

I just wanted to explain the containerd config file to highlight its importance. Containerd is what the kubelet will communicate with through the CRI (we’ll dive deeper into this when we create pods on the worker nodes, but that’s a topic for another day 😁).

Now, let’s proceed by adding the Kubernetes (K8s) repo. I’m going to install vanilla K8s (plain Kubernetes) from the upstream Kubernetes package repository, not a specific commercial distribution.
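A sketch of the repo file, using the community-hosted pkgs.k8s.io location where the upstream packages live (v1.30 here is an assumption; pin whichever minor release you want):

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
    enabled=1
    gpgcheck=1
    gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
    exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
    EOF
    # Install the three packages, bypassing the exclude line above
    sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes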

As you can see, I installed these three packages and their dependencies:

kubeadm : the command-line tool for bootstrapping a Kubernetes cluster.

kubectl : the command-line tool for interacting with a Kubernetes cluster.

kubelet : the agent that runs on each node in the Kubernetes cluster. Its primary responsibility is to ensure that containers are running in the Pods on that node.

Now, a few final steps before we initialize the cluster: we need to enable the services and open the right ports:
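Roughly like this, assuming firewalld is your active firewall:

    # Enable and start both services
    sudo systemctl enable --now containerd kubelet
    # Open the control-plane ports (explained below)
    sudo firewall-cmd --permanent --add-port={6443,2379-2380,10250,10251,10252}/tcp
    sudo firewall-cmd --reload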

Et voilà, the services (containerd and kubelet) are enabled, and the required ports are open.

Let me give you a quick overview of these ports:

  • 2379–2380: for the etcd server client API.
  • 6443: for the Kubernetes API server.
  • 10250: for the Kubelet API.
  • 10251: used by the kube-scheduler.
  • 10252: for the kube-controller-manager.

And that’s it. Let’s create our cluster using the kubeadm init command:
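A sketch of the init command; the pod network CIDR is an assumption that matches Calico’s default, so adjust it if your network add-on differs:

    # Initialize the control plane
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16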

Creating the cluster might take a few minutes, so let’s have some coffee and come back in a bit 😁.

After the cluster is created successfully, we have a few more tasks to tackle. First, we need to configure a client (user) on this master node, which you can do by copying the commands that kubeadm prints at the end of its output.
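These are the commands kubeadm prints at the end of a successful init:

    # Make the admin kubeconfig available to your user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config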

Remember when I mentioned that I’m going to install plain K8s, meaning basic Kubernetes without any additional plugins? That means we need a network add-on for communication between pods. Luckily, Kubernetes supports the Container Network Interface (CNI), a generic interface that allows different network plugins to be used.

There’s a variety of network add-ons available, and I’ve decided to go with Calico, which is one of the most commonly used plugins. You can find more information about it on the project’s GitHub page (https://github.com/projectcalico/calico/).
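Installing it comes down to applying one manifest (v3.28.0 is an assumption; check the project’s releases page for the current version):

    # Install the Calico network add-on
    kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml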

As I mentioned earlier, I’ll be using two other servers as worker nodes to join the cluster. The good news is that the preparation steps are exactly the same, so I’ll run my script on the first worker node, which we’ll refer to as “worker1”:

And I’m going to do the same on “worker2” as well.

As a final step, I’ll copy /etc/kubernetes/admin.conf to both worker nodes, and then we’ll proceed to join the cluster:
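A sketch of the copy, assuming root SSH access and that the worker hostnames resolve (e.g. via /etc/hosts):

    # Copy the admin kubeconfig to both workers
    scp /etc/kubernetes/admin.conf root@worker1:/etc/kubernetes/admin.conf
    scp /etc/kubernetes/admin.conf root@worker2:/etc/kubernetes/admin.conf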

Run the same copy for each node, and we’re ready:

At this point, the cluster consists only of the control plane node. I’ll generate a token and simply paste the join command on both worker nodes to complete the process.
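On the control plane, this one-liner generates a fresh token and prints the full join command:

    # Print a ready-to-paste join command
    kubeadm token create --print-join-command
    # The output looks roughly like this (your token and hash will differ):
    #   kubeadm join master.askri.lab:6443 --token <token> \
    #       --discovery-token-ca-cert-hash sha256:<hash>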

Give them some time to reach the Ready status, et voilà:
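You can watch their status from the control plane:

    # Both workers should eventually show STATUS Ready
    kubectl get nodes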

The two worker nodes have joined successfully, and our cluster is now fully operational.

I hope you enjoyed this demo! You can now create your own Kubernetes ecosystem on your servers.

This marks the end of our first article in this section. Next time, we’ll dive deeper into Kubernetes and explore various categories of pods and volumes.

If you encounter any issues during the installation, don’t hesitate to reach out. Thank you for reading, please don’t forget to like and share, and I’ll see you in the next one.
