+++
title = 'Creating a DMZ'
date = 2024-09-12T20:00:00-05:00
tags = [ "cloudflare", "npm", "security", "unifi", "proxmox", "unraid"]
+++
I've finally decided to address one of the projects that I set out to do in [Part 2](./soha-2024-part-2.md) of my inaugural homelab report. I have been wanting to establish a DMZ for the longest time, but I was intimidated by my lack of knowledge about both networking and security. For the uninitiated: a DMZ (demilitarized zone) is a network segment that hosts your public-facing services and has firewall rules limiting or completely cutting off its communication with the rest of the internal network. If implemented properly, a breach of a public-facing service is contained to that segment and cannot spread to the rest of your network.

Previously, all of my services lived in a single network I had named Infrastructure. I had port forwarding set up to send any external requests on ports 80 and 443 to my Nginx Proxy Manager (NPM). Within NPM, I had created an Access List for private IP ranges and restricted all "internal" services to that access list. In theory, that should keep outside connections away from those internal services. In practice, however, there was only one layer of security between the outside world and my network.

## Creating the DMZ
The first order of business was creating the DMZ itself, which I named "External Services". Thankfully, UniFi makes creating the network and firewall rules easy for networking novices like me. After creating the network and picking a VLAN ID, I made sure that the physical switch ports the VMs connect through were configured to carry the VLAN I had just created.

Next, I booted up one of the two Proxmox hosts that I had powered off and migrated all VMs destined for the External Services network there, which thankfully was not a lot. Looking back, there was no real need to move them off the host they were already on. But since I did not really understand the implications of making the VM bridge network "VLAN aware" at the time, I wanted to limit my losses in case it went sideways.

The procedure went as follows:
- On Proxmox VE, select the node and go to System -> Network. Select the bridge the VMs communicate over and click Edit
- Check the "VLAN aware" box, then save and apply the configuration
- Shut down a VM hosting an external service and migrate it to the aforementioned Proxmox host
- After the migration completes, go to Hardware, select the network device the VM uses to connect to the VLAN-aware bridge, and click Edit. Then set the VLAN tag to the new network's VLAN ID
- If Cloud-init is still attached, edit the IP configuration there and assign the machine a new address from the External Services network
- Start the VM. If Cloud-init wasn't attached to the VM, apply a new IP address manually by editing `/etc/network/interfaces` (see the example after this list)
- Make sure the new VM is pingable, and update the relevant DNS records
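
For reference, here's roughly what that manual edit looks like on a Debian-based VM. The interface name, addresses, and subnet below are placeholders for illustration, not my actual values:

```
# /etc/network/interfaces: static address on the External Services VLAN
# (interface name and addresses are illustrative placeholders)
auto ens18
iface ens18 inet static
    address 192.168.30.20
    netmask 255.255.255.0
    gateway 192.168.30.1
```

A reboot (or `systemctl restart networking` on a stock Debian install) picks up the change.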

One of the key decisions I made during this process was creating a new reverse proxy that lives on the External Services network and serves as the gateway to the external services. I spun up a new VM on that network and set up a fresh install of Nginx Proxy Manager on it. As I migrated services onto the new network, I modified their DNS records so that each friendly name points to the external services reverse proxy.
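
For a migrated service, the DNS change amounts to a hosts-style record pointing the friendly name at the new reverse proxy (both the hostname and the address here are made-up examples):

```
# Local DNS record: friendly name -> external services reverse proxy
192.168.30.10    myservice.example.home
```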
### Hiccups with unRAID
For the most part, this work on Proxmox went smoothly. However, I dreaded having to do the same on unRAID. Going in, I believed it would not offer as smooth a VLAN experience as Proxmox, and it didn't. After reading a few posts on the unRAID forums, I followed the steps below:
- Shut down Docker containers and VMs, then disable the Docker and Virtual Machines features on unRAID
- In Network Settings, select the interface you want to be VLAN aware and change "Enable VLANs" to Yes
- Add the new VLAN; the IPv4 address should be optional, but I decided to give it a dedicated IP address anyway. Apply the changes
- Go to the Docker settings and add a custom network on the interface created in the previous step. There's no need to set a gateway or DHCP pool
- Change "Enable Docker" back to Yes and hit Apply. The Docker service should now be re-enabled
- Re-enable the Virtual Machines feature. I have not yet had to apply VLANs to virtual machines, and I'd like to avoid doing so if I can help it. I'll write a separate post on it if I ever find myself needing to
- For each Docker container that needs to move to the external services network, click "Edit" and change the Network Type to the newly created interface. Add a static IP address and run the container. It should now be accessible via that static IP (the equivalent Docker CLI call is sketched below)
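
unRAID drives all of this through its web UI, but under the hood the container change is roughly equivalent to attaching the container to the new VLAN-backed custom network with a static address. A sketch with made-up names and addresses (unRAID typically names the custom network after the interface, e.g. `br0.30` for VLAN 30 on `br0`):

```
# Run the container on the VLAN-backed custom network with a static IP
# (network name, address, and image are placeholders for illustration)
docker run -d \
  --name my-public-app \
  --network br0.30 \
  --ip 192.168.30.40 \
  -v /mnt/user/appdata/my-public-app:/config \
  my-public-app:latest
```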

Of course, things did not go as smoothly as the list above suggests. In between the configuration changes, I ran into issues such as the web interface suddenly going down and services becoming inaccessible. At one point, after creating the VLAN, UniFi flagged the machine as having the same IP address as the other unRAID port that I had not touched at all. If I have one regret from this whole endeavor, it's that I did not thoroughly document these problems or the steps I took to resolve them. To this day, I don't know how I fixed the duplicate-IP issue, nor why the inaccessible services suddenly became reachable on the right network. I'm assuming the two were related, but the solution will remain one of the world's mysteries.

## Firewall Rules
With all of the services migrated, it was high time to tighten the rules:
- Updated the port forwarding rules to forward 443 to the external services reverse proxy server, and removed port 80 forwarding
- All LAN networks can access the external services network, but only through its reverse proxy servers and on specified ports
  - Tip: leveraging the IP and Port Groups feature in UniFi simplifies this a bit, so you don't have to create a rule for every IP address + port number combination
- All private IP ranges can access the internal services network, but only through its reverse proxy servers and on specified ports
  - Previously, this rule had the ports set to Any
- The Portainer master server can access all of the external services network, but only on port 9001. This is the port exposed by the Portainer Agents, and it lets me keep using the existing master server to manage Docker containers in the external services network
- Select VMs that need MariaDB can access the internal services network, but only on port 3306 (the default MariaDB port)
- Ditto the above, but for Postgres, and only on port 5432 (the default Postgres port)
- All private IP ranges can access the DNS servers, which should let me keep leveraging the existing Pi-hole cluster when interacting with the external services network

At the end of the list, a catch-all rule blocking communication between private IP ranges stops any interaction that has not been explicitly allowed. The overall rule set ends up looking roughly like the sketch below.
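
Schematically, with the rules evaluated top to bottom (the network names mirror mine; everything else is a stand-in for the real groups and hosts):

```
# Evaluated top to bottom; "specified ports" stands in for the per-service port groups
ALLOW  WAN                -> External Services reverse proxy   tcp/443  (port forward)
ALLOW  any LAN network    -> External Services reverse proxy   specified ports
ALLOW  any private range  -> Internal Services reverse proxy   specified ports
ALLOW  Portainer master   -> External Services (any host)      tcp/9001
ALLOW  select VMs         -> Internal Services MariaDB host    tcp/3306
ALLOW  select VMs         -> Internal Services Postgres host   tcp/5432
ALLOW  any private range  -> Pi-hole DNS servers               udp+tcp/53
DROP   any private range  -> any private range                 (catch-all)
```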
## Leveraging Cloudflare Proxy
Another change I made was turning on Cloudflare proxying for all of my DNS records there. Two reasons: it masks my public IP address, but more importantly, it let me restrict external connections to my reverse proxy to Cloudflare IPs only. If I ever suffer a DDoS attack, I can leverage Cloudflare to help guard against it. Not that I'd care to test it.
On the external services NPM, I created a new Access List that contains all of the private IP ranges plus Cloudflare's public IP ranges, which are published on their site. I then updated all of the entries in that reverse proxy to use the new access list.
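
Sketched out, the allowed ranges boil down to the RFC 1918 private blocks plus whatever Cloudflare currently publishes at https://www.cloudflare.com/ips/ (only a couple of Cloudflare entries are shown here as examples; always copy the current list from that page):

```
# Access list "allow" entries: private ranges + Cloudflare
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
# ...plus every range from https://www.cloudflare.com/ips/, for example:
173.245.48.0/20
103.21.244.0/22
# (remaining Cloudflare ranges omitted)
```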
To test, I created a dummy DNS record on Cloudflare with the proxy turned off. In theory, this should still let me hit the external reverse proxy from a non-Cloudflare IP. The result is...inconclusive? I was expecting to see a page from Nginx about my request being blocked or something, but instead all I got was the error message "A server with the specified hostname could not be found". I don't think that's what I should be seeing, so the digging will continue.
For the time being, I'm pretty happy with how it all turned out, and I feel more secure about my homelab setup than I did before this project.