I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences from the few self-hosted things to the many self-hosted things.
Also, how would you compare it to the maintenance of your other systems (e.g. personal computer, phone, etc.)?
Very minimal. Mostly just run updates every now and then and fix what breaks, which is relatively rare. The Docker stacks in particular are quite painless.
Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server and I believe that’s about it?
I’m a DevOps engineer at work, managing 2k+ VMs that I can more than keep up with. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible, Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers: it matters very little, since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.
+1 for docker and minimal maintenance. Only updates or new containers might break stuff. If you don’t touch it, it will be fine. Of course there might be some container specific problems. Depends what you want to run. And I’m not a devops engineer like Max 😅
Same same - just one update a week on Friday, between two yawns, for the 4 VMs and 10-15 services I have, plus a quarterly backup. Doesn’t involve much, apart from the odd ad-hoc re-linking of the reverse proxy when containers switch IPs on the Docker network after a VM restart/reset.
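(For anyone hitting the same issue: putting the proxy and the apps on one user-defined network and pointing upstreams at service names instead of IPs usually removes that re-linking step entirely, since Docker’s embedded DNS tracks containers across restarts. A compose sketch; every service, image and network name here is made up:)

```yaml
# docker-compose.yml sketch — names are placeholders, not from the thread
services:
  proxy:
    image: nginx:alpine
    networks: [web]
    ports: ["80:80"]
  app:
    image: ghcr.io/example/app:latest   # hypothetical app image
    networks:
      web:
        aliases: [app.internal]         # proxy upstreams reference this name, never an IP
networks:
  web:
    driver: bridge
```

With this layout the proxy config says `proxy_pass http://app.internal;` (or just `app`) and survives any IP reshuffle.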
Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade stuff here and there as needed. I am getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.
Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.
A lot less since I started using docker instead of running separate vms for everything. Less systems to update is bliss.
It’s bursty; I tend to do a lot of work on stuff when I do a hardware upgrade, but otherwise it’s set it and forget it for the most part. The only servers I pay any significant attention to in terms of frequent maintenance and security checks are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it does get compromised it would be a very minimal danger to the rest of my network. Everything either has automated updates, or for servers I want more control over I manually update them when the mood strikes me or a big vulnerability that affects my software hits the news.
TL;DR: if you averaged it over a year, I maybe spend 30-60 minutes a week on self-hosting maintenance tasks for 4 physical servers and about 20 VMs.
sometimes I remember I’m self hosting things
As long as you remember before you turn off the computer!
I don’t understand. “Turn… off?”
neofetch proudly displaying 5 months of uptime
my main PC hosts nothing, everything else is always on
+1 automate your backup rolling, setup your monitoring and alerting and then ignore everything until something actually goes wrong. I touch my lab a handful of times a year when it’s time for major updates, otherwise it basically runs itself.
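(The "automate your backup rolling" part can be a one-line cron job; a sketch assuming GNU coreutils and a flat directory of dated archives — the path and retention count are made up:)

```shell
#!/bin/sh
# rotate-backups.sh — minimal backup rolling: keep only the $KEEP newest
# archives in $BACKUP_DIR (both values are assumptions, adjust to taste)
BACKUP_DIR=/srv/backups
KEEP=7
# list newest first, skip the first $KEEP, remove whatever is left over
ls -1t "$BACKUP_DIR"/*.tar.gz 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm --
```

Drop it in cron (or a systemd timer) right after the backup job itself and it runs unattended.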
Huge amounts of daily maintenance because I lack self control and keep changing things that were previously working.
highly recommend doing infrastructure-as-code, it makes it really easy to git commit and save a previously working state, so you can backtrack when something goes wrong
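(The whole "save a working state, backtrack when it breaks" loop really is just plain git; a self-contained sketch where the directory, file contents and commit message are all placeholders:)

```shell
#!/bin/sh
set -e
# demo: version a compose file in git so a known-good state is one checkout away
rm -rf /tmp/iac-demo && mkdir -p /tmp/iac-demo && cd /tmp/iac-demo
git init -q
echo "services: {app: {image: nginx:alpine}}" > docker-compose.yml
git add docker-compose.yml
git -c user.name=demo -c user.email=demo@example.com commit -q -m "working state"
# break it, then recover the committed version:
echo "oops" > docker-compose.yml
git checkout -- docker-compose.yml
grep -q nginx docker-compose.yml && echo restored
```

`git revert` / `git checkout <commit>` give you the same escape hatch for changes you already committed.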
Got any decent guides on how to do it? I guess a docker compose file can do most of the work there, not sure about volume backups and other dependencies in the OS.
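(Not a full guide, but for named volumes the usual trick is to tar them up from a throwaway container; a sketch where "appdata" is a hypothetical volume name:)

```shell
#!/bin/sh
# back up a named Docker volume to a tarball in the current directory by
# mounting it read-only into a throwaway container ("appdata" is made up)
if command -v docker >/dev/null 2>&1; then
  docker run --rm \
    -v appdata:/data:ro \
    -v "$(pwd)":/backup \
    alpine tar czf /backup/appdata.tar.gz -C /data .
else
  echo "docker not available; pattern shown only"
fi
```

Restoring is the same idea in reverse: mount the volume read-write and `tar xzf` the archive into it.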
Sorry I replied to the parent comment, but check out Ansible
Oh, I think I tried at one point, and when the guide started talking about inventory, playbooks and hosts in the first step it broke me a little xd
I get it. The inventory is just a list of all the servers and PCs you’re trying to manage, and the playbooks contain every step you would take if you were configuring everything manually.
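(Concretely, both pieces are small text files. The inventory is a plain list of hostnames, e.g. an `inventory.ini` with a `[homelab]` group, and a playbook like this runs the steps against all of them; the hostnames and the single task here are made up:)

```yaml
# playbook.yml — "every step you would take manually", written down as YAML
- hosts: homelab              # group defined in inventory.ini
  become: true                # escalate to root for package work
  tasks:
    - name: Upgrade all packages (Debian/Ubuntu hosts)
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
```

Run it with `ansible-playbook -i inventory.ini playbook.yml` and it applies to one host or a hundred, same command.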
I’ll be honest, when you first set it up it’s daunting, but that’s the thing! You only need to do it once; then you can deploy and redeploy anything you have in minutes.
Ansible is great for this!
I have weekly backups of my VMs in Proxmox. Fuck it lol.
Nightly backups to a repurposed QNAP running PBS. I’m fully aware it’s overkill, but it gives me some peace of mind.
I opted for weekly so I could store longer time periods. If I want to go a month back I just need 4 backups instead of 30. At least that was the main idea. I’ve definitely realized I fucked something up weeks ago without noticing before lol.
I’ve got PBS setup to keep 7 daily backups and 4 weekly backups. I used to have it retaining multiple monthly backups but realized I never need those and since I sync my backups volume to B2 it was costing me $$.
What I need to do is shop around for a storage VM in the cloud that I could install PBS on. Then I could have more granular control over what’s synced instead of the current all-or-nothing approach. I just don’t think I’m going to find something that comes in at B2 pricing and reliability.
Once setup correctly, almost none.
I could spend a lifetime setting up my self hosted stuff correctly.
True, didn’t say that it didn’t take me an eternity to set it up
Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.
For some reason my DNS tends to break the most. I have to reinstall my Pi-hole semi-regularly.
NixOS plus Docker is my preferred setup for hosting applications. Sometimes it is a pain to get running, but once it does it tends to run. If a container doesn’t work, restart it. If the OS doesn’t work, roll it back.
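(For the curious, the Docker half of that is a couple of lines in `configuration.nix` — these are standard NixOS module options — and the rollback is built into the system:)

```nix
# /etc/nixos/configuration.nix (fragment)
virtualisation.docker.enable = true;  # host containers under Docker
system.autoUpgrade.enable = true;     # optional: periodic unattended nixos-rebuild
```

If a rebuild misbehaves, `sudo nixos-rebuild switch --rollback` (or picking the previous generation in the boot menu) drops you back to the last working system.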
I have just been around my small setup and run an OS update; took about an hour. That includes a reboot of a dedicated server with OVH.
A Pi and a mini PC at home, plus a dedi at OVH running 2 LXCs and 5 QEMU VMs. All Debian, a mix of 11 and 12.
I spend Wednesday evenings checking what updates need installing, I get an email every week from newreleases.io with software updates and run Semaphore to check on OS updates.
If my ISP didn’t constantly break my network from their side, I’d have effectively no downtime and nearly zero maintenance. I don’t live on the bleeding edge, I don’t do anything particularly experimental, and most of my containers are as minimal as possible.
I built my own:
- x86 router with OPNsense
- Proxmox hypervisor
- Cheapo WiFi AP
- ThinkCentre NAS (just 1 drive, Debian with Samba)
- Containers: Tor relay, gonic, corrade, owot, apache, backups, dns, owncast
All of this just works if I leave it alone
New Lemmy Post: How much maintenance do you find your self-hosting involves? (https://lemmyverse.link/lemmy.world/post/14656240)
For my local media server? Practically none. Maybe restart the system once a month if it starts getting slow. Clear the cache, etc.
When I hosted game servers: Depending on the game, you may have to fix something every few hours. Arma 3 is, by far, the worst. Which really sucks because the games can last really long, and it can be annoying to save and load with the GM tool thing.
Was that a mix of games being more involved and the way their server software was set up, from what you could tell, or…?
A bit of both. It really depends on the game. Some games are super simple, just launch an executable and hand out the IP. Others are needlessly complicated or just horribly coded. My example game is just an absolute mess all around even just as a player; running a server is no different. And since the actual game is all user-made, sometimes the problem is the server software, and sometimes it’s how the mission you’re running was coded. Sometimes it’s both.
30 docker stacks
5 minutes a day spent on updates and checking GitHub for release notes
15 minutes a day “acquiring” stuff for the server