A Step-by-Step Guide to Configuring NFS as a Backend Solution for K8s

Meher Askri
7 min read · Aug 2, 2024


Hello and welcome to this second article in the orchestration section. Today we’ll dive into one of the mandatory concepts in Kubernetes: storage. We’ll start by exploring the storage mechanism in K8s and how it works, then I’ll guide you through a step-by-step demonstration of how to set up an NFS share as a storage provisioner and have a Persistent Volume Claim (PVC) use that provisioner to automatically create a Persistent Volume (PV). While it may seem a bit complex, don’t worry, I’ll do my best to simplify the process for you.

How does the K8s storage mechanism work?

In Kubernetes, it all starts with a pod. A pod is just a collection of two things (containers + volumes). From a pod’s perspective, it’s possible to go directly to the storage we have on that node, but that would make the pod not portable at all.

To ensure pod manifest files work in different cluster environments, we need to decouple the storage from the pod so that we can launch the pod no matter which node it runs on. To achieve that, we need a different solution: Persistent Volumes.

How does this dynamic storage work?

The pod requests storage through a PVC (short for Persistent Volume Claim), which is an API object. The PVC (simply a demand for a volume) checks the cluster to see whether that specific storage is available; if there is a matching PV (Persistent Volume), it’s easy: the PVC gets bound to the PV.

But that’s still not flexible, because it requires making sure a matching PV is available every time. So we need something more dynamic to automate the creation of PVs, and for that we use an SC (short for StorageClass), an API object designed to automate the creation of PVs.

So the SC automatically provisions storage the moment a PVC request comes in, using what we call a storage provisioner: a plugin that uses a CSI (Container Storage Interface) driver to communicate with the external storage (in our case, NFS).

I know this all seems a little complicated, so to simplify: think of a PV as the high-level representation of an external storage asset in Kubernetes, a PVC as a ticket that grants a Pod access to a PV, and the SC as what makes it all dynamic via a provisioner, the underlying layer that actually accesses the external storage.

So, let’s jump to the terminal:

As you can see, I’m logged into my cluster. The control node is at the top, and below it are the two worker nodes (worker1 and worker2).
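If you’re following along, you can check your own cluster layout with:

```bash
kubectl get nodes -o wide
```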

I’ll start by installing the NFS service (check out one of my previous articles to find out all about it). In this demo, the control node will also serve as the NFS server (I won’t be using a separate server for that).
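Here’s a minimal sketch of that installation, assuming a RHEL-family distro (package and service names may differ elsewhere):

```bash
# On the control node (our NFS server in this demo)
sudo dnf install -y nfs-utils

# Enable and start the NFS server
sudo systemctl enable --now nfs-server
```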

As you can see, the service is installed (I installed it silently), so let’s configure it real quickly:
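The exact export options aren’t critical for this demo; a minimal configuration, using the /nfstest share path we’ll rely on later, could look like this:

```bash
# Create the directory to share
sudo mkdir -p /nfstest
sudo chmod 777 /nfstest   # wide-open permissions, for demo purposes only

# Export the share; in a real setup, restrict the client spec (*) to your cluster subnet
echo "/nfstest *(rw,no_root_squash)" | sudo tee -a /etc/exports

# Re-read /etc/exports and apply the changes
sudo exportfs -rav
```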

So far, so good (I just forgot the “sudo” in the “systemctl” command; it’s not an auth error).

Let’s install the client package on the worker nodes too (in RHEL it’s the same package).
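Something along these lines on each worker (here &lt;control-node-ip&gt; is a placeholder for your NFS server’s address):

```bash
# On worker1 and worker2: the NFS client tools ship in the same package
sudo dnf install -y nfs-utils

# Verify that the share is visible from the workers
showmount -e <control-node-ip>
```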

And as you can see, both worker nodes (the NFS clients) can access the share, so everything is good.

Notice that everything we’ve done up to this point is just pure Linux, without any DevOps stuff (I’ll leave a bash script below).

Now, let’s move on to the second part, which is configuring the storage provisioner. To do that, we first need to install the plugin using Helm (the package manager for K8s).

Depending on your Kubernetes distribution, if Helm is not already installed, it’s easy to set up: simply download the tarball, copy it to the control node, extract it, and then copy the Helm binary to /usr/local/bin.
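As a sketch (the version below is just an example; grab the latest release from https://helm.sh):

```bash
# Download and unpack the Helm tarball (example version)
curl -LO https://get.helm.sh/helm-v3.15.0-linux-amd64.tar.gz
tar -xzf helm-v3.15.0-linux-amd64.tar.gz

# Copy the binary into the PATH and verify
sudo cp linux-amd64/helm /usr/local/bin/helm
helm version
```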

OK, so let’s start by adding the NFS provisioner repository to Helm (you can find the project on GitHub) and then install it:
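Assuming the widely used kubernetes-sigs nfs-subdir-external-provisioner chart (which matches the parameters broken down below), the commands look like this; &lt;control-node-ip&gt; is a placeholder, and nfs-storage is the StorageClass name we’ll reuse in the PVC:

```bash
# Add the provisioner's chart repository and refresh the index
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update

# Install the provisioner, pointing it at our NFS server and share
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=<control-node-ip> \
  --set nfs.path=/nfstest \
  --set storageClass.name=nfs-storage
```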

Okay, let me break down the different parameters of the “helm install” command:

The first parameter is the release name (I’ve named it “nfs-provisioner”; you can choose any name you want). The second and third parameters refer to the chart repository and the specific chart name. Then come the IP address of the NFS server and the NFS path, and finally I specify the StorageClass name for the provisioned storage.

Now that the provisioner and the StorageClass have been successfully created, let’s proceed to create the Persistent Volume Claim (PVC), which will request a Persistent Volume from the SC, and the SC will automatically provision it.
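Here’s a sketch of that manifest; the PVC name nfs-pvc is my own choice, and the storageClassName must match the one we set during the Helm install:

```yaml
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce             # read/write by a single node at a time
  resources:
    requests:
      storage: 1Gi              # the amount of storage requested
  storageClassName: nfs-storage # must match the SC created by the provisioner
```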

As you can see, the PVC requests 1Gi of storage, the access mode is ReadWriteOnce (meaning read/write by a single node at a time), and the storageClassName must be identical to the one on the SC for successful binding.
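Apply it and watch the binding happen:

```bash
kubectl apply -f pvc.yaml

# The PVC should turn "Bound" once the provisioner creates the PV
kubectl get pvc,pv
```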

And as you can see, the PV has been created automatically and bound successfully.

Finally, let’s create two pods with one container each (I’ll use the busybox image), and each pod will use the PVC as its volume (don’t worry, I’ll leave all those YAML files in my GitHub repo at the end).

But before that, I need to label the two worker nodes: worker1 with nfs-test1=true and worker2 with nfs-test2=true.
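The labeling itself is one command per node:

```bash
kubectl label nodes worker1 nfs-test1=true
kubectl label nodes worker2 nfs-test2=true
```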

Then I’ll use Node Affinity in the pod specification to assign each pod to one of the worker nodes based on the labels we created earlier. I’ll start with the first one:
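Here’s a sketch of the first pod’s manifest; the pod, container, and volume names are my own choices (the full files are in the repo):

```yaml
# pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: nfs-test1          # the label we put on worker1
                operator: In
                values:
                  - "true"
  containers:
    - name: busybox1
      image: busybox
      command: ["sleep", "3600"]        # keep the container alive for testing
      volumeMounts:
        - name: nfs-vol1
          mountPath: /mnt/nfs1
  volumes:
    - name: nfs-vol1
      persistentVolumeClaim:
        claimName: nfs-pvc              # the PVC we created earlier
```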

Let’s apply it:
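Assuming the manifest was saved as pod1.yaml:

```bash
kubectl apply -f pod1.yaml
kubectl get pods -o wide
```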

Note that if it’s the first time you’re using this image, the pod may sit in the “ContainerCreating” status for a while as the container image gets pulled. Just wait and check again.

And I’ll create the second one to run on the worker2 node:
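Again a sketch, mirroring the first manifest:

```yaml
# pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod2
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: nfs-test2          # the label we put on worker2
                operator: In
                values:
                  - "true"
  containers:
    - name: busybox2
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs-vol2
          mountPath: /mnt/nfs2
  volumes:
    - name: nfs-vol2
      persistentVolumeClaim:
        claimName: nfs-pvc              # both pods share the same claim
```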

I just changed the name of the pod, the label key, the name of the container, the name of the volume, and the mountPath.

Let’s apply it:
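Same as before, assuming pod2.yaml:

```bash
kubectl apply -f pod2.yaml

# Both pods should now be Running, each on its own worker
kubectl get pods -o wide
```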

Great, both pods are up and running on different worker nodes. Now, let’s create a test file inside each container to confirm that they both share the storage.
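With the names assumed above, the test looks like this:

```bash
# Create a test file from inside each pod
kubectl exec nfs-pod1 -- touch /mnt/nfs1/testfile1
kubectl exec nfs-pod2 -- touch /mnt/nfs2/testfile2

# List the volume from the first pod's perspective
kubectl exec nfs-pod1 -- ls /mnt/nfs1
```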

And as you can see, testfile2, created by the second container, is visible from the first container’s perspective.

Now, the real question is: where should these two files exist?

Based on what we’ve done so far, these files should be located on the backend server, which is the NFS server (the control node in this demo), and if you remember, we named the NFS share directory /nfstest.

So, if we look inside this directory, we should be able to find a subdirectory for the persistent volume that was created, and according to the recipe 😄, we will find our data (the two test files) inside it.

Let’s check:
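On the control node (our NFS server), the provisioner creates one subdirectory per dynamically provisioned PV, named after the namespace, the PVC, and the PV:

```bash
# On the NFS server
ls -R /nfstest
```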

Et voilà, both files are there on the NFS share, inside the subdirectory for the PV, which serves as the backend storage for these two pods.

And that concludes it. I hope you enjoyed this demo and gained some insight into configuring NFS as backend storage for Kubernetes.

Thank you for reading; I hope you found it helpful. Please don’t forget to like and share, and I’ll see you in the next one.
