Building a Secure Centralized YUM/DNF Repository with Apache: A Comprehensive Guide
Hi and Welcome again,
Today, we’re delving into one of the most renowned services in the Linux ecosystem: Apache (and no, I’m not referring to the US helicopter 😄).
Apache needs no introduction; it is one of the most popular web servers in the world. In fact, a large share of the web servers on the internet run Apache, thanks in part to its simplicity.
So, let’s create a cool project:
In the Linux world, when you want to install a package, you typically retrieve it from your distribution’s official repositories, such as those of Debian, Red Hat, or CentOS. However, in an enterprise environment it’s not a good idea to expose your internal servers to the internet; even if they sit behind firewalls and go out through a proxy server, that still isn’t secure in some cases (we’ll talk about verifying package integrity and checksums another day).
Another option is to configure a local repository on each server, either by creating an ISO image and then mounting it, or by mounting the CD-ROM device directly. While this approach can be a good solution, it’s not without its drawbacks:
One of the primary concerns is increased storage consumption. This method essentially means that each server is burdened with its own “bag of packages”, as in the image, which can quickly lead to storage limitations, especially in environments with many servers and limited storage resources.
A better solution is to configure a centralized HTTP server (which could also be an FTP server) to serve as a central repository. This approach is highly advisable, particularly in enterprise environments. When a package installation is required, it is retrieved from this central server, which enhances security and simplifies management .
And that’s what I’m going to do today: I’ll guide you through the process of creating a local repository, installing Apache, and configuring it to serve from this directory while prioritizing security measures such as TLS, SELinux, and GPG keys. Additionally, to automate the deployment of the configuration across two clients, I will use Ansible.
Alright, let’s begin with this project:
I have three nodes: node5 (IP 10.1.1.9) will serve as the centralized Apache repository server, while node2 (10.1.1.5) and node4 (10.1.1.8) will be the clients. The client VMs are a mix of Red Hat and Rocky Linux (I’ll explain why I made this choice later).
Here on my node5, I’ve already created a bash script to automate the entire process (I’m just going to explain it):
The script is actually explained line by line :
I started by mounting the ISO disk image, then created a tar archive and extracted it into a directory named “repo” (the directory Apache will serve from). Then I removed all default files with the “.repo” extension from the “/etc/yum.repos.d/” directory and created a new one named “local.repo”.
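Since the script itself only appears as a screenshot, here is a runnable sketch of that copy step. Temporary directories stand in for the real /mnt and /repo so you can dry-run it without root; the /dev/sr0 device path is an assumption.

```shell
#!/usr/bin/env bash
# Sketch of the mount-archive-extract step. Temp dirs stand in for the
# real locations, so this is safe to run unprivileged.
set -eu
iso_mnt=$(mktemp -d)   # on the real server: mount /dev/sr0 /mnt
repo_dir=$(mktemp -d)  # on the real server: mkdir -p /repo

# Fake a bit of DVD content so the round trip has something to carry
mkdir -p "$iso_mnt/BaseOS/Packages"
echo dummy > "$iso_mnt/BaseOS/Packages/example.rpm"

# Archive the mounted tree, then unpack it into the repo directory
archive=$(mktemp)
tar -cf "$archive" -C "$iso_mnt" .
tar -xf "$archive" -C "$repo_dir"

ls "$repo_dir/BaseOS/Packages"
# On the real server the script then removes the stock repo files:
#   rm -f /etc/yum.repos.d/*.repo
# ...and writes a fresh local.repo.
```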
Next, I created the content of the repository configuration file, “local.repo”. As you may know, since RHEL 8 we have two repositories, AppStream and BaseOS: AppStream contains application packages and BaseOS contains system packages. While some people prefer to put them in separate files, it’s acceptable to include them together in one file under two sections. After that, the script installs Apache, mod_ssl (the mod_ssl module is not installed by default with Apache on Red Hat-based systems), and the createrepo utility used to build the YUM repository metadata.
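A “local.repo” along those lines might look like this sketch (the section names, repo names, and gpgkey path are assumptions; adjust them to your media):

```ini
[BaseOS]
name=Local BaseOS
baseurl=file:///repo/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[AppStream]
name=Local AppStream
baseurl=file:///repo/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
```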
Afterward, the script uses the createrepo command to generate the repository metadata under “/repo” and copies the official GPG key. Subsequently, it opens ports 80 and 443 (HTTP and HTTPS) and configures Apache to serve files from the “/repo” directory by editing the DocumentRoot directive. The second sed command changes the default <Directory> path to “/repo”; typically, you would define the configuration settings for that directory in a block of its own.
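Those server-side steps might look like the sketch below. They need root plus the createrepo and firewalld tools, so the sketch guards itself and only prints a notice elsewhere; the port numbers come from the article, the exact commands are assumptions.

```shell
# Build the repo metadata and open the firewall; requires root on the
# repo server, so the sketch is guarded and safe to dry-run anywhere.
if [ "$(id -u)" -eq 0 ] && command -v firewall-cmd >/dev/null 2>&1; then
  createrepo /repo                       # generates the repodata/ XML under /repo
  firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
  firewall-cmd --reload
  result="applied"
else
  echo "skipping: run as root on the repo server"
  result="skipped"
fi
```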
But since the default block for the “/var/www/” directory already exists, I just changed the directory path parameter with sed, which is simpler:
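The sed edits can be sketched against a scratch copy of the stock directives (the /var/www defaults below are the usual RHEL httpd.conf values; the real script edits /etc/httpd/conf/httpd.conf in place):

```shell
#!/usr/bin/env bash
# Demonstrates the two sed edits on a temp copy instead of the real httpd.conf
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
DocumentRoot "/var/www/html"
<Directory "/var/www">
    AllowOverride None
    Require all granted
</Directory>
EOF

# Point DocumentRoot at /repo, then retarget the default <Directory> block
sed -i 's|^DocumentRoot "/var/www/html"|DocumentRoot "/repo"|' "$conf"
sed -i 's|^<Directory "/var/www">|<Directory "/repo">|' "$conf"

grep -E 'DocumentRoot|Directory' "$conf"
```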
Next, I created an ACL (an exception to the standard file access permissions). Normally, to give a user access to a file, you either make them the owner or a member of the owning group; however, an ACL lets you grant additional access to a specific user. In this case, I kept root as the owner and gave the apache user full access to the directory. Since Apache is not serving from its default directory, SELinux would block access by default; to resolve this, I changed the context type of the “/repo” directory to “httpd_sys_content_t”. (SELinux is a complex topic that requires more explanation, so I’ll cover it in more detail in the future.)
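A sketch of the ACL and SELinux labeling steps, guarded so it is safe to dry-run without root (the apache user is the default on RHEL; the exact flags in the original script may differ, and semanage comes from policycoreutils-python-utils):

```shell
# Grant the apache user access via ACL, then relabel /repo for httpd;
# requires root on the repo server, so the sketch guards itself.
if [ "$(id -u)" -eq 0 ] && command -v semanage >/dev/null 2>&1; then
  setfacl -R -m u:apache:rwX /repo                       # extra ACL entry for apache
  semanage fcontext -a -t httpd_sys_content_t "/repo(/.*)?"
  restorecon -Rv /repo                                   # apply the new context
  result="applied"
else
  echo "skipping: run as root on the repo server"
  result="skipped"
fi
```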
Next, I used the OpenSSL tool to generate a self-signed certificate and a private key (I used RSA with a 2048-bit key length, which is considered strong and computationally very hard to break). Normally, in an enterprise environment, you would generate a Certificate Signing Request (CSR) and send it to a Certificate Authority (CA) for signing. It’s important to note that self-signed certificates are not trusted by default, since they lack validation from a trusted CA. I’ll show you a trick at the end: injecting the certificate into the certificate store on the client VMs to make it look trusted (don’t do this in a production environment; it’s a violation of security policies).
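Generating the key pair can be done in a single openssl invocation. This runnable sketch writes into a temporary directory; the subject CN is an assumption (it should match the hostname clients use):

```shell
#!/usr/bin/env bash
set -eu
cd "$(mktemp -d)"

# Self-signed certificate plus a fresh 2048-bit RSA key, no passphrase (-nodes)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout myrepo.key -out myrepo.crt \
  -subj "/CN=node5"

# Inspect the subject to confirm what we signed
openssl x509 -in myrepo.crt -noout -subject
```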
After generating the private key, I named it “myrepo.key” and the certificate “myrepo.crt”. Then I edited the “/etc/httpd/conf.d/ssl.conf” file with a sed command to specify the path to the key pair.
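That edit can be sketched the same way on a scratch copy of ssl.conf (the localhost.* lines below are the stock mod_ssl defaults; the destination paths under /etc/pki/tls are assumptions, as the article doesn’t name them):

```shell
#!/usr/bin/env bash
# Demonstrates pointing mod_ssl at the new key pair, on a temp copy
set -eu
ssl=$(mktemp)
cat > "$ssl" <<'EOF'
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
EOF

sed -i 's|^SSLCertificateFile .*|SSLCertificateFile /etc/pki/tls/certs/myrepo.crt|' "$ssl"
sed -i 's|^SSLCertificateKeyFile .*|SSLCertificateKeyFile /etc/pki/tls/private/myrepo.key|' "$ssl"

cat "$ssl"
```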
Lastly, I created an additional configuration file named “myrepo.conf” in the “/etc/httpd/conf.d/” directory. This file instructs Apache to redirect any incoming traffic on port 80 to HTTPS (port 443) .
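A “myrepo.conf” matching that description could be as short as this sketch (the ServerName is an assumption):

```apache
# /etc/httpd/conf.d/myrepo.conf
<VirtualHost *:80>
    ServerName node5
    # Send every plain-HTTP request to the HTTPS site
    Redirect permanent / https://node5/
</VirtualHost>
```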
Finally, I restarted the Apache server to apply the changes and unmounted the disk ISO image .
The script may take a while to execute due to the creation and extraction of the archive, so I’ll be back when it’s done.
As you can see, the script ran successfully, and I was able to install packages . So, let’s move on to the second part where I’m going to use Ansible :
I’ve already set up a directory (remote_repo_project) on my Ansible control node, complete with the ansible.cfg configuration file and an inventory listing node2, node4, and node5 (where node5 serves as the central Apache repository server). So, let me show you the main playbook (which consists of two plays):
As I mentioned earlier, the self-signed certificate isn’t trusted because the system can’t find its signature in the trusted certificate store. That’s why I used the fetch module to pull the certificate from the Apache server (node5) to my Ansible control node and place it under the “/tmp/” directory. Next, I used the copy module to transfer it to the trusted certificate store of the remote servers (node2 and node4). Then I updated the trusted certificate store, and finally I used a template to deploy the repo configuration file.
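Abbreviated, the two plays might look like this sketch (the module names are the real Ansible builtins; the play names and file paths are assumptions, and note that fetch prepends the host name to the destination path):

```yaml
- name: Grab the certificate from the repo server
  hosts: node5
  tasks:
    - name: Fetch the self-signed certificate to the control node
      ansible.builtin.fetch:
        src: /etc/pki/tls/certs/myrepo.crt
        dest: /tmp/          # lands at /tmp/node5/etc/pki/tls/certs/myrepo.crt

- name: Distribute the certificate and the repo config
  hosts: node2,node4
  tasks:
    - name: Copy the certificate into the trusted anchors
      ansible.builtin.copy:
        src: /tmp/node5/etc/pki/tls/certs/myrepo.crt
        dest: /etc/pki/ca-trust/source/anchors/myrepo.crt

    - name: Refresh the trusted certificate store
      ansible.builtin.command: update-ca-trust extract

    - name: Deploy the repo definition from the template
      ansible.builtin.template:
        src: local.repo.j2
        dest: /etc/yum.repos.d/local.repo
```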
Before we proceed to run the playbook, let’s take a look at the template :
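A “local.repo.j2” matching the description might look like this sketch, where `repo_server` is a hypothetical inventory variable resolving to node5 (the repo names and gpgkey location are assumptions):

```ini
# local.repo.j2 -- {{ repo_server }} is a hypothetical variable, e.g. node5
[BaseOS]
name=Remote BaseOS
baseurl=https://{{ repo_server }}/BaseOS
enabled=1
gpgcheck=1
gpgkey=https://{{ repo_server }}/RPM-GPG-KEY-redhat-release

[AppStream]
name=Remote AppStream
baseurl=https://{{ repo_server }}/AppStream
enabled=1
gpgcheck=1
gpgkey=https://{{ repo_server }}/RPM-GPG-KEY-redhat-release
```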
As you can see, the baseurl points to the node5 web server, and the gpgcheck is enabled with the URL of the GPG key pointing to node5 as well . So, let’s run the playbook and verify :
For node5, you’ll notice that the result shows ok instead of changed . This is because the file was already copied when I tested the task .
Now, let me login to these two servers (node2 and node4 ) first to show you something cool .
As you can see in my split screen, one server is running Rocky Linux 8.9 while the other is running Red Hat 8.4. I intentionally selected different distributions to illustrate that the underlying distribution doesn’t matter, as long as it’s RPM-based.
Let’s attempt to install a simple package like “nmap” and see if both servers can reach the Apache server.
Et Voilà, now you have a centralized Apache-based yum/dnf repository that can serve all the servers in your environment .
And that’s it for today. Now you know how to build a local YUM/DNF repository with Apache. This will make your life easier, especially when dealing with intranet servers, because managing dependencies without a repository can be a nightmare.
If you found this article helpful, please consider liking and sharing it. Thank you for taking the time to read, and I’ll see you in the next one .