Configure NFS Server and Client using Ansible
Hello and welcome,
In this article, we are going to see how to configure an NFS server and client using Ansible, but before we start, let’s take a quick look at what NFS is.
What is NFS?
NFS is one of the most famous services in Linux, and the idea behind it is simple:
It is a distributed file system protocol that allows a computer to access files over a network as if they were on its own local hard drive. In other words, it is a great way to share files across Linux systems.
And like any service, it has two parts: the client part and the server part.
In this demo, I’m using two virtual machines: one as an NFS client (node1) and the other as an NFS server (node2).
Setup Ansible
The first thing we need to do is to configure the Ansible config file and the inventory.
Let’s start by creating a project directory and naming it “nfs_project”. Inside the project directory, create a file named “ansible.cfg” with the following basic and simple configuration:
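(The exact contents will vary; the sketch below is a minimal example, and the remote_user value is an assumption based on the root login used in this demo.)

[defaults]
# Inventory file we create in the next step
inventory = ./nfs_inventory
# Log in to the managed nodes as root (as in this demo)
remote_user = root
# Skip interactive SSH host-key prompts in a lab environment
host_key_checking = false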
Next, let’s create an inventory file called “nfs_inventory” and add the managed servers to it. You can use IP addresses if you prefer (I have already configured the hostnames to resolve through /etc/hosts).
In my example, node1 is the NFS client, and node2 is the NFS server.
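(A minimal inventory along these lines does the job; the group names are purely illustrative.)

[nfs_server]
node2

[nfs_client]
node1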
And finally, create a file that contains the SSH password and make sure to set the correct file path in the “ansible.cfg” file. In this example, I’m using “root” as the password for the root user login.
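(As a sketch, assuming password-based SSH logins, you could create the file and lock down its permissions like this, then point Ansible at it with connection_password_file = ./secret_password under [defaults] in ansible.cfg; that option is available in recent ansible-core releases and also needs the sshpass utility on the control node. The file name secret_password is just an example.)

# Store the SSH password and restrict the file's permissions
echo 'root' > secret_password
chmod 600 secret_password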
I hope everything is clear so far. Now, let’s perform a ping test to ensure everything is functioning correctly. As you can see, the managed nodes have responded successfully.
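(For reference, that ad-hoc ping against all the hosts in the inventory is a one-liner; each node should answer with “pong”.)

ansible all -m ping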
Creating NFS Playbook
Now, let’s begin creating our playbook. I’ll name it “nfs_playbook.yml”, but feel free to choose any name you prefer.
In this playbook, we have two plays. The first play will be executed on node2, which is the NFS server in this example, and the second play will be executed on node1, which is the NFS client.
Please make sure to replace “node2” and “node1” with the actual host names or IP addresses of your NFS server and client, respectively.
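To make the structure concrete, here is a sketch of what nfs_playbook.yml could look like; the handler names, the client task file name (nfs_client.yml), and the exact module arguments are my assumptions, and the service names assume a RHEL-family system:

---
- name: Configure the NFS server
  hosts: node2
  tasks:
    - name: Run the NFS server tasks
      include_tasks: tasks/nfs_server.yml

  handlers:
    # Runs only when a server task reports a change and notifies it
    - name: start and enable nfs
      service:
        name: nfs-server
        state: started
        enabled: true

    # Runs only when the firewall task reports a change and notifies it
    - name: reload firewalld
      service:
        name: firewalld
        state: reloaded

- name: Configure the NFS client
  hosts: node1
  tasks:
    - name: Run the NFS client tasks
      include_tasks: tasks/nfs_client.yml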
Explanation of the Playbook
Let me explain the main playbook, which I named “nfs_playbook.yml”. The main playbook consists of two plays, each containing an included task.
The first play is executed on node2 (the NFS server), while the second play is executed on node1 (the NFS client). Additionally, I want to mention the handlers.
Handlers are designed to perform an additional action when a task makes a change. Think of them as extensions to regular tasks.
In this case, the first handler will start and enable the NFS service, and the second handler will reload the firewalld service.
Now, let me explain what is inside the tasks directory (you’ll find all of the files in my GitHub repository).
The directory contains two included tasks, which are called by the main playbook. The reason for separating the tasks into another file is to keep the main playbook more organized and manageable. By splitting the tasks into separate files, it helps to maintain a clear and concise structure.
Inside the “nfs_server.yml” file, you will find the tasks that are specifically intended to be executed on node2. These tasks cover the NFS server configuration and any other actions required on that node.
The first two tasks are straightforward and easy to understand. The first task installs the necessary package, while the second task creates a directory that will be shared.
The third task involves copying specific files from the “/etc/” directory. It uses a glob pattern, “[a-c]*”, where “[a-c]” matches any file name that starts with the letter “a”, “b”, or “c”, and the trailing wildcard “*” means it doesn’t matter what comes after those letters. The task uses “with_fileglob” as a loop to iterate over every file matching the pattern and copy each one.
The fourth task is simple. It adds the NFS share directory to the “/etc/exports” file with the appropriate permissions. This step ensures that the NFS share is properly configured for access.
Finally, the last task is responsible for opening the necessary ports on the firewall to allow access to the NFS share. This ensures that network traffic can flow freely between the NFS server and the clients.
These tasks collectively set up the NFS share, configure necessary permissions, and ensure proper network accessibility.
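Putting those five steps together, a sketch of tasks/nfs_server.yml could look like the one below; the share path (/share), the export options, and the handler names are assumptions on my part, and the package and firewall service names assume a RHEL-family distribution:

---
- name: Install the NFS server package
  package:
    name: nfs-utils
    state: present

- name: Create the directory that will be shared
  file:
    path: /share
    state: directory

# with_fileglob expands the pattern on the control node, so these files
# are copied from the controller's /etc to the share on the server
- name: Copy /etc files starting with a, b or c into the share
  copy:
    src: "{{ item }}"
    dest: /share/
  with_fileglob:
    - /etc/[a-c]*

- name: Export the share in /etc/exports
  lineinfile:
    path: /etc/exports
    line: "/share *(rw,sync)"   # assumed clients and export options
    create: true
  notify: start and enable nfs

- name: Allow the NFS-related services through the firewall
  firewalld:
    service: "{{ item }}"
    permanent: true
    state: enabled
  loop:
    - nfs
    - rpc-bind
    - mountd
  notify: reload firewalld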
Now, let me show you the other file that contains the tasks to be executed on the NFS client.
It contains a simple task, which is to mount the NFS share on the “/mnt” directory and make it persistent in the “/etc/fstab” file.
Notice that I used the “_netdev” mount option to ensure that the system doesn’t attempt to mount the NFS share before network connectivity is fully available.
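For completeness, a sketch of that client task file (nfs_client.yml is my assumed name for it), reusing the share path from the server sketch:

---
# Assumes the nfs-utils package is already present on the client
- name: Mount the NFS share on /mnt and persist it in /etc/fstab
  mount:
    src: node2:/share        # assumed export from the server play
    path: /mnt
    fstype: nfs
    opts: _netdev            # do not mount before the network is up
    state: mounted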
And that’s it, very simple and very clear to understand. Now, let’s proceed and run the playbook to see if any errors occur.
As you can see, the absence of failed tasks indicates that everything is functioning correctly. However, to ensure that we have achieved the desired outcome, let’s execute a final ad-hoc command to verify the presence and accessibility of those files on the mount point (“/mnt”) of node1. Run the following command:
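(An ad-hoc ls through the command module is one straightforward way to do this.)

ansible node1 -m command -a "ls -l /mnt"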
Voilà! The files are accessible on the mounted share. Finally, let’s execute another command to verify that the share is mounted successfully.
Run this command:
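(Here I’m using df through the command module as one option; findmnt /mnt would work just as well.)

ansible node1 -m command -a "df -h /mnt"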
You can see that it is mounted successfully.
I hope you enjoyed this little project. If you have any further questions or need assistance, feel free to leave a comment below. Please don’t forget to share and like if you found this article useful.