Added new post 2024-09-03
All checks were successful
ci/woodpecker/push/build Pipeline was successful

content/posts/disappearance-of-my-plex-server.md
@@ -0,0 +1,18 @@
+++
title = 'Disappearance of my Plex Server'
date = 2024-09-03T19:00:00-05:00
tags = ["plex", "incident_report", "unraid"]
+++

To celebrate the long weekend, I was going to take it easy with my Homelab tasks (hence the lack of posts over the past couple of days) and just watch some shows I've got stored on my Plex. I logged on to my Plex, but my libraries were nowhere to be found. This sent me into a bit of a panic, as I had just spent CAD $2.5k to ensure I'd have some redundancy on my media files. Of course, they could easily be redownloaded, but it would be such a pain in the ass that I'd rather not.

To my relief, all of the media files were still there. But when I checked the Docker volume where Plex is supposed to store all of its Library metadata, it was empty. There's only one possible culprit: I hit the "apply all updates" button on unRAID just a few days prior. All of my healthchecks failed to catch this particular incident, since the server is up; it's just that the configuration it's supposed to have is gone.

To this day, I am still stumped as to why on earth a simple update would wipe the Library metadata folder. I never had this happen back when I was hosting Plex via Portainer, and the configuration I have on unRAID is identical to what I had previously. All of that Library metadata should be stored and persisted in the cache pool.

Thankfully, I still have the backup folder from when I was moving the metadata from my decommissioned OMV server over to my new one. It was missing some settings (which is worthy of a post of its own later), but it still contains a good chunk of the metadata I'd rather not configure from scratch again.

The downside is that I had taken this backup just over a month ago, so all of the data recorded since then has been lost to the void. Not a big deal in the grand scheme of things, but it's definitely unacceptable if I want to open this server up for more friends and family to use.

So, a cautionary tale for me and for any of you reading: take backups before doing updates! Especially on unRAID, where I'm unfamiliar with how it manages its Docker containers and updates. I might very well just have to set up a cron job that periodically zips up this Library folder and stores it somewhere, along the lines of the sketch below.

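A minimal sketch of that cron-driven backup idea: the paths, retention count, and schedule below are placeholders, not my actual setup. Note that zipping a live Plex database only gives you a crash-consistent copy, so ideally stop the container first.

```
# backup_plex_library.py - zip up the Plex Library folder with a timestamp.
# Paths are hypothetical; adjust to your own Docker volume mappings.
# Example cron entry:  0 3 * * 0  /usr/bin/python3 /boot/scripts/backup_plex_library.py
import shutil
from datetime import datetime
from pathlib import Path

LIBRARY = Path("/mnt/user/appdata/plex/Library")  # source: Plex metadata folder
DEST = Path("/mnt/user/backups/plex")             # target: parity-protected share
KEEP = 4                                          # number of archives to retain

DEST.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
archive = shutil.make_archive(str(DEST / f"plex-library-{stamp}"), "zip", LIBRARY)
print(f"Wrote {archive}")

# Prune the oldest archives beyond the retention count
for old in sorted(DEST.glob("plex-library-*.zip"))[:-KEEP]:
    old.unlink()
```
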
content/posts/soha-2024-part-2.md
@@ -0,0 +1,106 @@
+++
title = 'State of the Homelab Address - Part 2'
date = 2024-09-03T19:00:00-05:00
tags = ["homelab", "soha"]
+++

## Main Homelab Infrastructure

Picking up from the last part, I'd like to go more in depth into how my main homelab infrastructure is set up. As mentioned before, I have 2 x IBM x3650 M3 servers, each with dual Intel Xeon X5660 CPUs and 96 GB of RAM. I've decided to use Proxmox as the baremetal OS for both of these servers, after watching and following [this](https://www.youtube.com/playlist?list=PLT98CRl2KxKHnlbYhtABg6cF50bYa8Ulo) series by LearnLinuxTV.

I've been actively using this setup for over two months now, if I'm not mistaken, and it's been nothing but delightful to use. Even without the high-availability part of the cluster setup (as that requires an odd number of nodes), the ability to migrate machines from one node to another is already a step up from OMV and even unRAID's virtualization platform. More than anything, having everything managed under a single pane of glass is what I'm after.

For my VMs, the world's my oyster. But I've decided to stick with what I know best: Debian. It's been my tool of choice ever since college, when I first seriously got into Linux, and thankfully cloud-init makes spinning up new servers dead easy.

### The NAS Server

Ever since high school, when I first became aware of the existence of this magical box called a NAS, where I could keep my data centrally stored and secure with redundancy, I've been meaning to get my hands on one. Sadly, both the hardware and the hard drives were out of my budget for the longest time. That ended this summer, when I had the financial breathing room to build myself a sick rig that would store all of my totally legally-acquired media.

On the hardware side, I have the following build:

```
CPU: Intel Core i5-12400
Motherboard: MSI PRO B760 MA
RAM: Corsair Vengeance 2 x 64 GB
Storage: 2 x 1TB WD Black SN770 (Cache Pool), 3 x 16TB Seagate IronWolf Pro
Case: Fractal Design Node 804
PSU: Cooler Master V850 Modular ATX
+ a bunch of Arctic Cooling PWM fans
```

For the software side, I bought an unRAID Pro license a few months back, when I heard they were going to change their licensing model. At first, I was torn between it and TrueNAS. Ultimately, I leaned towards unRAID because of how flexible you can be with how much storage you throw at it up front. From my understanding, unRAID will accept different-sized drives, with the caveat that no data drive can be larger than the parity drive (so with a 16TB parity drive, any mix of drives up to 16TB works). With TrueNAS leveraging ZFS, you kind of have to pre-plan how many drives you want up front, as expansion is not as flexible as unRAID's.

While I did have to pay for unRAID, the benefits are still there for me. Right now it's acting as the backup target for my laptop, my mostly inactive PC, and, with the help of Proxmox Backup Server, several of my VMs.

Speaking of which, that is another advantage Proxmox offers. With Proxmox Backup Server, backing up virtual machines has never been easier. I run PBS as a VM on unRAID, as it needs direct access to the storage pool, but otherwise linking it to the Proxmox cluster is dead easy using [this](https://forum.proxmox.com/threads/how-to-setup-pbs-as-a-vm-in-unraid-and-uses-virtiofs-to-passthrough-shares.120271/) helpful tutorial as my guide.

Among the many hats it wears, it also acts as my media server, where I run a pretty big Plex server. As soon as I get to my document archiving project and to setting up roaming profiles with Active Directory, it'll wear another hat as the main file server and play an even more crucial role in my lab than it currently does.

### My Docker setup

Sitting on top of that layer is another tool of choice for deploying applications: Docker. I did weigh using LXC instead, as it's also something Proxmox supports by default, but in the end I decided against it for the following reasons:

- I wanted kernel separation between my baremetal servers and the guest OSes. Back when I was using Hetzner VMs, I ran into kernel panics (which I could not replicate for the life of me) while tinkering with Postgres. I wanted to rule out the possibility of bringing down a Proxmox node in a similar scenario.
- I'm concerned about update management. I'm simply more familiar with handling that in Docker, though I suspect once I get into something like Ansible I can automate that part too.
- I'm not aware of an easy way to manage LXC containers the way I manage Docker containers.

On that last point, it might be better to just talk about the way I manage them. Currently, I have more or less 15 servers, all of them running Docker with various containers on top. I use Portainer to centrally manage all of the containers across my lab.

At first, in order to challenge myself to really get familiar with Docker, I opted not to use it and instead deployed containers the old-fashioned way, via the CLI. But after a handful of instances where I had to restart containers for a variety of reasons, whether after an update or when I wanted to change a configuration, typing it all out definitely became a chore. This could easily have been solved by leveraging a docker-compose file, but just as with Proxmox, sometimes it's easier to do it all under a single pane of glass. In addition, even Portainer's Community Edition offers a pretty robust API that I can use to automate certain tasks I'd otherwise do in the GUI.

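For instance, here's a minimal sketch of restarting a container through that API. Portainer proxies the Docker Engine API under `/api/endpoints/{id}/docker`; the URL, endpoint ID, and container name below are placeholders, not my actual setup.

```
# Restart a container via Portainer's proxy to the Docker Engine API.
# All identifiers here are hypothetical; generate the key under
# My Account -> Access tokens in the Portainer GUI.
import requests

PORTAINER_URL = "https://portainer.example.lan:9443"  # hypothetical address
API_KEY = "ptr_xxxxxxxx"                              # Portainer access token
ENDPOINT_ID = 1                                       # environment ID in Portainer
CONTAINER = "plex"                                    # hypothetical container name

resp = requests.post(
    f"{PORTAINER_URL}/api/endpoints/{ENDPOINT_ID}/docker/containers/{CONTAINER}/restart",
    headers={"X-API-Key": API_KEY},
    verify=False,  # homelab self-signed cert; use a CA bundle if you have one
    timeout=30,
)
resp.raise_for_status()
print(f"Restarted {CONTAINER}: HTTP {resp.status_code}")
```
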
Ultimately, all of these are stopgap measures until I can sit down and properly learn Kubernetes. While the current setup is serving me well, it falls short of one of my main end goals for the lab: high availability. If I have to take down a VM for whatever reason, all of the containers running on top of it go down as well. And while I have rolled my own version of a high-availability feature, there are still some gaps there. For the time being, Portainer serves my use case well, and I'll continue using it for the foreseeable future.

### My services

At the time of this writing, I'm running 15 services and counting. Some of them have been spotlighted on my [about me](../../about-me.md) page, but I figure I should shine a bit more spotlight on the two I've been getting the most mileage out of:

#### PiHole

My DNS server of choice. It uses Unbound as its upstream DNS (also running in a container), which in turn uses Cloudflare as its upstream. Like many, I initially set this up with the goal of blocking ads. However, I kept running into more issues than I'd hoped for, to the point where any benefits it brings have been outweighed by the disadvantages.

One funny (and also frustrating) instance was when I was raging because I couldn't get past the reCAPTCHA screen while trying to get into my Fido account. After a couple more minutes of cursing and bashing my head against the wall, it finally clicked that I should try pausing the ad blocking, and lo and behold, it worked!

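That pause can even be scripted. Here's a minimal sketch against the classic Pi-hole v5 admin API (`api.php?disable=<seconds>&auth=<token>`); the address and token are placeholders, and newer Pi-hole releases ship a different API.

```
# Pause Pi-hole ad blocking for five minutes (Pi-hole v5 admin API).
# Address and token are hypothetical; find the token under
# Settings -> API in the Pi-hole admin GUI.
import requests

PIHOLE_URL = "http://pi.hole/admin/api.php"  # hypothetical address
API_TOKEN = "replace-with-your-api-token"

resp = requests.get(
    PIHOLE_URL,
    params={"disable": 300, "auth": API_TOKEN},  # seconds to stay disabled
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # expect {"status": "disabled"}
```
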
That, and the fact that ads get through the filters anyway even with a pretty aggressive blocklist, made me decide to keep using it purely as a DNS server. Unfortunately, Ubiquiti (to this day) still does not offer the robust DNS server one would expect from a company whose business model is selling networking equipment to prosumers. Guess they'd rather sell EV chargers!

#### Nginx Proxy Manager

My reverse proxy of choice. Gone are the days when I navigated to my applications using a hodgepodge of IP addresses, port numbers, and red lock icons warning that the site is insecure. Assigning a friendly name to an application and binding a certificate to it has never been easier!

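NPM also exposes the same REST API its GUI is built on, which is handy for auditing what's being proxied. A minimal sketch, assuming the `/api/tokens` login flow and the `/api/nginx/proxy-hosts` listing; the URL and credentials below are placeholders.

```
# List Nginx Proxy Manager proxy hosts via its REST API (the API the GUI uses).
# URL and credentials are hypothetical.
import requests

NPM_URL = "http://npm.example.lan:81"  # hypothetical admin address

# Exchange admin credentials for a bearer token
token = requests.post(
    f"{NPM_URL}/api/tokens",
    json={"identity": "admin@example.com", "secret": "changeme"},
    timeout=10,
).json()["token"]

hosts = requests.get(
    f"{NPM_URL}/api/nginx/proxy-hosts",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
).json()

for h in hosts:
    print(", ".join(h["domain_names"]), "->", f"{h['forward_host']}:{h['forward_port']}")
```
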
At first, I was using Caddy, and it worked just fine for the brief period I used it. But in the end, it's just nice to have a GUI to visualize what I'm trying to do.

## Current Issues

This iteration of my homelab is definitely a big improvement over its previous incarnations. But even then, there's only so much I could accomplish in the past few weeks of chipping away at it. Below are some of the issues I currently have, which I hope to tackle over the next year:

### Memory Overhead

As mentioned before, my services are currently scattered across 15 or so VMs, a result of the decisions I made around separation of concerns and high availability. While it's not an issue yet, since each Proxmox node has 96 GB, I am finding that the setup consumes way more memory (currently 45 GB) than it should, given the very light workload it carries.

I'm afraid this issue will become more apparent as I introduce more VMs into the mix. So my current game plan is:

- **Consolidate database backends into one VM** - Currently, MariaDB, InfluxDB, Loki and Postgres each run in their own VM. Each one consumes about a quarter of the memory it's given, and the majority of that isn't even consumed by the database containers themselves. I reckon I can claw back more RAM by bunching them all into a single VM, and I might not even have to double what I currently allocate for them (3 GB)!
- **Consolidate Gitea and Woodpecker into one VM** - Same logic as above; and since Woodpecker can't really operate without Gitea, which it uses for authentication, there's no point in keeping the two separate.
- **Fold the Portainer master host into the existing applications VM** - Same logic as the previous two. The Portainer master host sits in its own VM, with all of the other VMs running Docker agents. I don't see much point in keeping it stand-alone, so I'll just fold it into the existing VM running all of the other applications.

### Lack of DMZ

This is one of my higher-priority projects. Currently, I'd say I'm pretty satisfied with how limited the external exposure of my sites is: most of my services are restricted to internal-only access via NPM, and the services that can be accessed externally run through the Cloudflare proxy.

However, as a popular metaphor in the industry goes, security should be treated like an onion: the more layers there are, the better. Introducing a DMZ, where publicly-facing services run behind very tight firewall rules, certainly wouldn't hurt compared to my current setup.

### Lack of autoupdates

Currently, short of unRAID and certain applications telling me an update is available, I have no way of knowing when to update, nor any way to automate updates. Ideally, I'd get to a state where I can approve and run updates from a centrally managed location, with an option to enable auto-updates should I wish to.

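Ready-made tools like Watchtower or Diun cover this space, but as a rough illustration of the detection half, here's a sketch using the Docker SDK for Python (docker-py) to flag running containers whose image tag has a newer pull available. It would run per host, or with `DOCKER_HOST` pointed at a remote daemon.

```
# Flag running containers whose image tag has a newer version in the registry.
# Requires: pip install docker
import docker

client = docker.from_env()

for container in client.containers.list():
    tags = container.image.tags
    if not tags:
        continue  # skip containers running an untagged image
    latest = client.images.pull(tags[0])  # fetch the current image for that tag
    if latest.id != container.image.id:
        print(f"{container.name}: update available for {tags[0]}")
```
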
### Lack of security detection

I currently lean on my UXG-Max and, to a certain extent, Cloudflare to do this for me. But again, it certainly wouldn't hurt to have something that gives me a heads-up if there's any funny business going on under my nose.

## Future Goals

In addition to addressing all of the current issues above, I'd like to do the following in the coming months:

- **10 Gbps Network Upgrade** - would facilitate faster backups from the Proxmox nodes to the NAS server, and faster migrations too, even without shared storage configured
- **Kubernetes** - not only would this aid my quest to make my services highly available, I think it's just a good thing to learn and get better at career-wise
- **Single Sign-On** - would be nice to have, especially with 2FA support. But honestly, my KeePass is getting me by just fine, so I don't feel much urge to push this one, especially in light of everything else I have going on
- **Introduction of Windows Servers** - also a nice-to-have, especially with Active Directory and GPOs. It's something I've dabbled in from time to time, but never had the chance to learn intimately.
- **Emergency Site** - another dream of mine is to be able to spin up even just a portion of my services off-site, but this is a mega project unto itself and could easily consume a year of my life.

## Closer

This concludes this year's State of the Homelab Address! I hope to provide another huge update next year, with some of my big to-dos knocked off the list.

@@ -4,26 +4,6 @@ date = 2024-08-28T22:59:00-05:00
 tags = ["homelab", "soha"]
 +++
 
-<!-- Sections
-- History
-- Current Setup
-- Hardware + Baremetal OS
-- Service stack setup
-- How separation is decided
-- Portainer and why not k8s
-- Current Issues
-- Memory overhead due to separation
-- Lack of DMZ
-- Lack of autoupdates
-- Lack of security detection
-- Future Goals
-- 10 Gbps
-- k8s
-- SSOs
-- Windows Server
-- Emergency Site -->
-
 
 As mentioned before, I've been intending to give an overview of my current Homelab environment. So I've decided to start a new series on this blog named the State of the Homelab Address. Just like its namesake, I'd like to have this series include the progress that's been made since last year and future goals I'd like to achieve for the next. Ideally, I'd have this written and released around the end of the year but if I put it off any further, any mental notes I might have are at risk of being lost in the void, so I want to get this written down immediately.