How to configure a centralized rsyslog server using Ansible
Hello and welcome,
In this article, we're going to see how to configure a centralized rsyslog server using Ansible, but first let's take a quick overview of system logging in Linux.
How does system logging work?
Most services on a Linux server write information to log files. This information can end up in different destinations, and there are multiple ways to find it in the system logs. In fact, there are three different approaches a service can use to write its logs.
First, there is what we call "direct write": the service writes its logging information directly to its own log files, like Apache and Samba do, for example.
We also have systemd-journald. This service arrived together with systemd, which makes it tightly integrated with it; it simply allows admins to read detailed information from the journal while monitoring service status.
And finally, we have the rsyslog service, an enhancement of syslog (which has been around for a long time). This service takes care of managing centralized log files, which is what interests us today.
I'm not going to dig into the details of journald versus rsyslog and how journald can be integrated with rsyslog; just keep in mind that journald is not a replacement for rsyslog, because of the remote logging option (which is what I'm going to explain next).
Now that we know what rsyslog is, what makes it still relevant in big environments?
To answer this question, imagine we have dozens of servers and other network devices (switches, routers…). It would be annoying to log in to each server to look at its logs every time we troubleshoot a problem. Instead, we can define one server as a centralized log server and configure all the others to forward their logs to it. This is what we call remote logging, and this is why we still need rsyslog.
So, in this demonstration, I'm going to configure a centralized rsyslog server (let's call it Central) and configure two other servers as clients (let's call them node1 and node2) so that they forward their logs to that central server. But I'm not going to do it manually, I'll do it with Ansible. Isn't that cool?
OK, let's jump to the terminal, create a directory for this project (I'll call it rsyslog_test) and create all the Ansible components, including the config file and the inventory file. It's pretty basic (if you want to know more about Ansible and how to install it, I've already written an article that explains the installation steps; check it out on this link).
The localhost will be the centralized rsyslog server; that's why I only added node1 and node2 to the inventory.
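For reference, a minimal sketch of those two files could look like this; the group name "nodes" and the remote_user are assumptions on my part, so adjust them to your own setup:

```
# ansible.cfg
[defaults]
inventory = ./inventory
remote_user = root
```

```
# inventory
[nodes]
node1
node2
```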
And as a final step, I need to generate an SSH key and copy it to both nodes, so that we have password-less, SSH key-based authentication (this is actually how Ansible communicates with the managed nodes, over SSH).
Let's start with node1:
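Something along these lines (I'm assuming root as the remote user; use whichever account you manage the nodes with):

```
ssh-keygen                  # accept the defaults and leave the passphrase empty
ssh-copy-id root@node1      # copy the public key to node1
```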
And the same for node2. Then let's test it with a quick ping-pong 😄.
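After copying the key to node2 as well, Ansible's ping module should answer "pong" for both nodes:

```
ssh-copy-id root@node2
ansible all -m ping
```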
Cool. Now that we've set up the Ansible basics, let's create our playbook:
We start by defining the name of the playbook (I'll call it rsyslog project), then we specify the hosts it should run on (which will be localhost in this example, since we're configuring the centralized rsyslog server), and then we start our tasks.
The first task is to enable reception, in other words to define which protocol and which port rsyslog will listen on. We have the choice between UDP and TCP; I'll choose TCP because it provides reliable delivery, so no messages get lost in transit.
To achieve that, I used the lineinfile module to simply uncomment the two relevant lines in the configuration file (/etc/rsyslog.conf). Since I'm modifying two lines, I iterate over them with a loop to keep the playbook short (I've explained loops before on many occasions).
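Here is a rough sketch of how this first play and task could look. The lines being uncommented are the imtcp directives shipped with recent rsyslog versions, and become: true is my assumption, so treat the regular expressions as a starting point and adapt them to your rsyslog.conf:

```
---
- name: rsyslog project
  hosts: localhost
  become: true
  tasks:
    - name: Enable TCP reception on port 514
      lineinfile:
        path: /etc/rsyslog.conf
        regexp: '{{ item.regexp }}'
        line: '{{ item.line }}'
      loop:
        - { regexp: '^#?module\(load="imtcp"\)', line: 'module(load="imtcp")' }
        - { regexp: '^#?input\(type="imtcp" port="514"\)', line: 'input(type="imtcp" port="514")' }
```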
The next task is to define a template. Why is that?
Because a template lets us specify what gets logged and where it should be written. To understand that, I need to explain the logging syntax in Linux.
In Linux, a logging rule is made of three parts: facility, priority, and destination.
The facility simply indicates what should be logged, for example auth messages, kernel messages, daemon messages… (check the rsyslog.conf man page for more details).
The priority is the severity of the message that needs to be logged. There are 8 priorities (numerically 0–7), ranging from debug, the lowest, up to emerg, the most critical (again, check the man page for more details).
And finally we have the destination, which is where those messages should be written; destinations are typically files.
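To see how the three parts fit together, here is what classic rules in rsyslog.conf look like (the kernel.log destination is just an illustrative example):

```
# all authentication messages, regardless of priority
authpriv.*      /var/log/secure
# kernel messages of priority warning or higher (illustrative destination)
kern.warning    /var/log/kernel.log
# emergency messages go to every logged-in user
*.emerg         :omusrmsg:*
```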
OK, so here I defined a template called "meher" (after my own name 😁). The logs will be written under my logs directory, with a subdirectory for each server's hostname (if a message comes from node1, there will be a node1 directory; if it comes from node2, there will be a node2 directory). The syslogseverity-text property gives the severity of the message in text form rather than as a number. For example, if node2 sends an alert message, I expect to find it in "/mylogs/node2/alert.log".
Of course there are better ways to adjust the file name: we can add the time, the facility, the name of the app…. It's up to you to be creative 😄 (you can find all the details in the rsyslog.conf man page, under the template section).
The next line, "*.*", means we log everything from every facility with every priority, the dash (-) tells rsyslog to omit syncing the log file after every logging event (a small performance gain), and then we call our template by name with "?meher".
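For clarity, in plain rsyslog syntax the template and the rule I just described boil down to roughly these two lines:

```
# dynamic file name: /mylogs/<hostname>/<severity>.log
template(name="meher" type="string" string="/mylogs/%HOSTNAME%/%syslogseverity-text%.log")

# log everything; the leading "-" omits syncing after each write
*.* -?meher
```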
And at the end, the task finishes with a notify statement that triggers a "restart_rsyslog" handler. You might wonder, what is a handler?
A handler is a conditional task that runs only when it is notified by another task that reports a change. In other words, it will be executed after the template is created or modified.
The next two tasks create that directory (mylogs) using the file module and open port 514 in the firewall to allow incoming packets on that port (remember, at the beginning I told you we have the choice between UDP and TCP, both are supported, and we chose TCP; that's why I open the port for the TCP protocol).
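As a sketch, those two tasks could be written like this (the directory mode is my assumption):

```
    - name: Create the central log directory
      file:
        path: /mylogs
        state: directory
        mode: '0755'

    - name: Open TCP port 514 in the firewall
      firewalld:
        port: 514/tcp
        permanent: true
        immediate: true
        state: enabled
```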
Finally, as you can see, there is the handlers section, which contains a simple task that restarts the service; in Linux we need to restart a service every time we change its configuration so the changes take effect.
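The handler itself is essentially a service restart, something like:

```
  handlers:
    - name: restart_rsyslog
      service:
        name: rsyslog
        state: restarted
```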
Now let's move to the second part, which is configuring the clients to send their logs to the centralized server.
So, we're going to use a second play (since it's going to run on different hosts). I'm using a Jinja2 template (I've explained it before in more detail) to create a remote.conf file in the destination /etc/rsyslog.d/.
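A sketch of that second play could look like this; the group name "nodes" and the template file name remote.conf.j2 are assumptions, and I also notify a restart handler so the clients pick up their new configuration:

```
- name: configure the clients
  hosts: nodes
  become: true
  tasks:
    - name: Forward logs to the central server
      template:
        src: remote.conf.j2
        dest: /etc/rsyslog.d/remote.conf
      notify: restart_rsyslog

  handlers:
    - name: restart_rsyslog
      service:
        name: rsyslog
        state: restarted
```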
The template file contains one line: "*.error @@10.1.1.100:514". This means we forward all logs with priority error or above to the IP address of the central log server (10.1.1.100) over TCP ('@@'; a single '@' would mean UDP) on port 514. It is important that both the sender and the receiver use the same protocol.
Finally, let's end with a test message (it's funny, but don't use it in a production environment 😁) using the syslogger module, which will send a message with a priority of alert and a facility of auth.
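In the clients' play, that test task could look like this with the community.general.syslogger module (the message text is the one we'll look for later):

```
    - name: Send a test message to the logs
      community.general.syslogger:
        msg: "Unknown log in"
        priority: alert
        facility: auth
```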
Based on what we've done so far, we should find that message in an alert.log file under a hostname subdirectory inside the mylogs directory on the centralized log server (localhost).
Let's run the playbook and see what happens. But first I need to put SELinux in permissive mode, because by default it will prevent the rsyslog service from writing to non-standard locations (that's another topic for another day).
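On the central server, that boils down to something like this (rsyslog.yml is just what I assume the playbook file is called; adjust it to your own file name):

```
setenforce 0              # permissive until the next reboot
getenforce                # should now report "Permissive"
ansible-playbook rsyslog.yml
```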
Well, it looks like everything went well. Let's check inside the "mylogs" directory:
Et voilà! As you can see, inside the mylogs directory we have three directories, one for each of the three servers, and we found the alert message "Unknown log in", which we sent earlier using the syslogger module on both nodes. Notice that the file names follow the severity.log format we specified in the template, and notice that we only see logs of error severity and above, based on what we configured those servers to send.
I hope you enjoyed this demonstration today. If you have further questions, feel free to leave them in the comments below.
Thank you for reading. Please don't forget to like and share, and I'll see you in the next one.