I’ve been around selfhosting most of my life and have seen a variety of different setups and reasons for selfhosting. For myself, I don’t really self-host as many services as I do infrastructure. I like to build out the things that are usually invisible to people. I host some stuff that’s relatively visible, but most of my time is spent building an over-engineered backbone for all the services I could theoretically host. For instance, full domain authentication and oversight with kerberized network storage, and both internal and public DNS.
The actual services I host? Mail and vaultwarden, with a few (i.e. < 3) more to come.
I absolutely do not need the level of infrastructure I’ve built, but I honestly prefer working on that to the majority of possible things I could host. That’s the fun stuff to me; the meat and potatoes. But I know some people do focus more on the actual useful services they can host, or on achieving specific things with their self-hosting. What types of things do you host and why?
Public services: my social network (Hubzilla), email (Mailcow), Matrix chat, PeerTube.
Private: my media (Jellyfin, Audiobookshelf, Calibre) and Home Assistant.
I enjoy the freedom that comes with this, and it’s like having your own home on the internet. I have a very modest setup, but it’s enough to host my friends and family, so nothing fancy like k8s. Just a refurbished OptiPlex running Docker :)
Do you access your private stuff from outside your home, and if so, how?
Nice, until you’re on a hotspot that blocks all but the most common ports.
I use HTTPS for everything; that has given me the best results overall. But of course, you can offer multiple options simultaneously.
(Preface: almost all of this is handled in a single Nix config, and no docker in use at all)
At home, in a two-host Proxmox cluster:
- blocky for adblocking
- a full *arr stack with torrents and nzbs for uuuuuuhhh Linux ISOs
- Jellyfin so friends and family can watch, I mean use the Linux ISOs
- Paperless (HIGHLY recommend)
- Wastebin (Pastebin alternative)
- Stirling-PDF (also really recommend, allowed me to get rid of Acrobat Reader for filling out and signing PDFs, plus a bunch more)
- Homeassistant
- Linux and Windows clients available for whenever you might need them (not often, but can come in handy)
- Borg client, backing up parts of my NAS to a cloud storage box
- OPNSense backup for the hardware firewall
- Forgejo
On a bare metal machine at a reputable cloud provider:
- my personal Email, Calendar, Contacts (super easy with Nix; rough sketch after this list)
- another blocky instance
- another borg client
- Rustdesk server (OSS Teamviewer)
- wireguard that’s just used by my TV so crunchyroll thinks it’s in (other country), Lmao
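To illustrate the “super easy with Nix” bit above: a stripped-down sketch of what the mail/calendar/contacts part can look like. It assumes the widely used simple-nixos-mailserver module plus Radicale for CalDAV/CardDAV; that module choice is my assumption, and all domains and paths are placeholders.

```nix
{ config, ... }:
{
  # Assumes the simple-nixos-mailserver module is imported (e.g. via its flake).
  mailserver = {
    enable = true;
    fqdn = "mail.example.com";        # placeholder
    domains = [ "example.com" ];      # placeholder
    loginAccounts."me@example.com" = {
      # Bcrypt hash kept outside the Nix store; generate with `mkpasswd -sm bcrypt`.
      hashedPasswordFile = "/var/secrets/me-mail.hash";
    };
  };

  # Radicale provides CalDAV/CardDAV for calendar and contacts.
  services.radicale.enable = true;
}
```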
Wishlist:
- Vaultwarden
- Immich, once added to nixpkgs
- PeerTube
- Pixelfed
If you want to keep everything inside a single Nix configuration while still using Docker, you can check out the NixOS option
virtualisation.oci-containers
- essentially, a declarative way of managing docker/podman containers (similar to docker-compose), but with Nix.
I know it’s been three weeks, but thanks for telling me about this! I might actually do this, for the projects here and there which aren’t packaged into nixpkgs (yet).
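For anyone who hasn’t seen the option before, a minimal sketch of what such a declaratively managed container looks like; the container choice (Vaultwarden, from the wishlist above), ports, and paths are just example values:

```nix
{ config, ... }:
{
  # podman is the default backend; "docker" works too
  virtualisation.oci-containers.backend = "podman";

  virtualisation.oci-containers.containers.vaultwarden = {
    image = "vaultwarden/server:latest";         # image pulled from Docker Hub
    ports = [ "127.0.0.1:8222:80" ];             # host:container
    volumes = [ "/var/lib/vaultwarden:/data" ];  # persistent data on the host
    environment = {
      DOMAIN = "https://vault.example.com";      # example value
    };
  };
}
```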
Any chance you could share any of your Nix config? I’m curious how it’s being used with Proxmox (I’m using ansible and terraform right now).
I thought about adding a link, but am a bit hesitant to de-anonymize myself on here 😅
But it’s basically this:
- Proxmox is not Nix configured. There’s a project for that, but IMO it’ll take a couple of years to be ready for production.
- I’ve created a custom Nix module that essentially just sets my default values for stuff like BIOS type, boot order, … and lets me set CPU cores, RAM, IP, …
- all this does, though, is set the corresponding values for the nixos-generators Proxmox output
- additionally, all the usual stuff is handled (user, known ssh keys, base config of the system)
- for each VM, I only have a single file containing the VM settings (ID, RAM, CPU, IP, …) and the service config for whatever the VM is for (rough sketch below)
- then lastly I have a custom script/shell function that essentially just lets me run “nixvm-new <flake output name>”, which generates the image, moves it to the NAS, and calls on Proxmox to import the image, plus some cleanup
TBH this sounds way more complicated than it is / feels to use 😄
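For what it’s worth, one of those per-VM files could look roughly like this. The `vm.*` options stand in for the custom module described above (its real option names will differ), while the service part is plain NixOS:

```nix
{ config, ... }:
{
  # Hypothetical options exposed by the custom VM module; names are illustrative.
  vm = {
    id     = 115;                 # Proxmox VM ID
    cores  = 2;
    memory = 4096;                # MiB
    ip     = "192.168.1.115/24";
  };

  # Whatever the VM is for; here, Jellyfin as an example.
  services.jellyfin.enable = true;
  services.jellyfin.openFirewall = true;
}
```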
Everything
…except email 😑
I self-host email, and it certainly isn’t something I’d recommend.
Yeah, hosting email as a company is a pain. I can’t imagine self-hosting it. At least as a company, people can look you up online.
The worst part really is just getting off the damn spam lists. There is almost no documentation anywhere for dos and don’ts. I ultimately had to set up a sending relay on my status-monitoring VPS, because my residential IP triggered most spam filters, but I only found out that that was the problem from forum posts investigating the same issue. I check with tools like mail-tester and get back perfect scores, and yet most of my outgoing emails still have a good chance of landing in the spam folder anyway (but at least they get delivered, so that’s a plus I guess).
As others in other threads have said: Google and Microsoft have killed the ability to self-host email simply by black-boxing their spam filters. As a user you have no real way to fix your mail server such that your emails get delivered into the inbox reliably.
I feel ya. And this doesn’t take into account users who mark one of your mails as spam and get you blacklisted for the whole org…
I self host jellyfin, nextcloud, owncast, tandoor, komga, photoprism and searxng. I use nginx proxy manager for a reverse proxy and SSL cert automation. Works great for me but I would like to get into traefik sometime.
I self host for privacy reasons, also it’s fun, it’s a learning opportunity and sometimes self-hosted services are functionally better than the other options out there.
I use nginx proxy manager for a reverse proxy and SSL cert automation. Works great for me but I would like to get into traefik sometime.
I got tired of NPM and went to Traefik for two reasons:
- NPM kept locking me out of my (admin) account, like 4 times during the time I was using it. That meant it was not reliable enough for daily use.
- From what I’ve heard, the NPM project has only one developer, so they can’t really respond to and fix security flaws in a proper timeframe.
I’m using Traefik now for internal traffic, and I VPN in if I need internal services while out and about.
Jim’s Garage has a great YouTube video on setting it up.
How did you set up a VPN to securely connect to your services over the internet? I have looked for guides to do this and haven’t had much luck. I would really like to implement this in my setup.
I can once again refer you to Jim’s Garage’s video about setting up WireGuard on Docker. Very easy.
Wg-easy, with a nice interface.
Thank you, I wasn’t sure if that video was re: Traefik or VPN. I appreciate the suggestion.
From what I’ve heard, the NPM project has only one developer, so they can’t really respond to and fix security flaws in a proper timeframe.
It’s mostly just nginx with a webui. You can even see the nginx config files if you bash into the container. It has the same bugs as upstream nginx. Do not expose the management port to the internet.
Plus compared to normal nginx, it’s harder to misconfigure it. Most of my services are just the default config, so I can’t mess it up accidentally.
About lockouts: that happened to me once too, but it was just a messed-up update; the next update fixed it. If you lock yourself out you can usually edit the DB directly; it defaults to SQLite, but I used it with MariaDB.
It started with Emby and pihole. I’m now up to about 30 different services from Vault, email, 3CX, home assistant, firefox, podgrab etc.
Really just video for me, I can’t handle paying for streaming anymore.
For sure anything with private data involved, aside from my email.
So everything to do with images, videos, file/document storage, etc…
Also game servers because they’re generally very easy to host at home, and due to generally high RAM and storage needs paying for hosting can be quite pricey.
Also game servers because they’re generally very easy to host at home, and due to generally high RAM and storage needs paying for hosting can be quite pricey.
Really?
I thought this was more the case with flexible providers like DigitalOcean. My current provider charges 5,36€ per month for 4 cores (though I assume this corresponds rather to 2 SMT-enabled cores), 6 GB of RAM and a 400 GB SSD. It offers better latency for most players (obviously not for myself) and in most cases has been sufficient regarding performance.
Fair, it does depend on what games you’re hosting. I often have multiple servers for different games running and some can use upwards of 10GB of RAM each when in use.
Highest I’ve had I think was an Avorion server that hit around 20GB of RAM usage with 5 or so players on.
I find that VPS cores are often very low-performance cores, since providers want high core density in their servers rather than fewer high-performance cores, and games like Arma 3, Minecraft, Enshrouded, etc. really need high single-thread performance to work well.
At the moment I am only doing Jellyfin, but I am looking to expand into Pi-hole, Audiobookshelf, and some of the *arr stack.
- Jellyfin
- Plex (I wanted to get rid of it, but I found my son’s TV has no Jellyfin client available, so I have to keep Plex up for him)
- Nginx
- Caddy
- ddclient to Cloudflare for my home dynamic IP
- Syncthing (such an underrated app)
- WireGuard
- Home Assistant
- Some other stuff that isn’t all that interesting
The actual services I host? Mail
What do you use for that?
What types of things do you host and why?
Self-hosting as in at home: nothing to the outside world, and I’m still sorting out a local NAS. I have a VPS with a few websites, but I guess that doesn’t fall into the self-hosting category.
I’d locally host media stuff, but not even that is that important to me atm. Next on my list is 3-2-1 backups so I can reorganize my setup and eventually self-host a WireGuard VPN to access some data.
I set up a mail stack on Rocky Linux with Postfix, Dovecot, and rspamd. I don’t need a database because it’s all LDAP on the backend, and I don’t have webmail set up right now because I’m lazy. It’s a bit of a hassle to get up and running well, but it’s pretty solid, and I’m careful about managing my domain reputation, so I don’t have any issues with my mail being delivered.
You can use Roundcube for web mail
I just haven’t gotten around to setting it up is all.
What do you use for that?
Because emails can hold a boatload of sensitive information (especially when collected en masse; think years and years of emails), and in the age of AI bullshit, minimizing how much of that data is directly attached to an account associated with you and owned by Google or some other corp seems like a sane desire. If your primary account is Gmail and they start training on that dataset (they probably already are), shit is going to get real testy.
I meant what software stack do you use to host your email.
Btw have you encountered issues with receiving/sending mail through that account, considering the ongoing cartelization?
Mailcow.
Personally, no. The hardest part is getting a clean IP and setting up PTR records for a static IP. The rest has been easy for me personally… but I do this shit for a living, so I might be biased.
If you email to people on gmail or outlook, won’t Google and Microsoft still end up with copies of most of your mail?
Yes, but at the very least they have to do queries to build that profile out across dozens or hundreds of recipients… And they only get what I explicitly sent to them/their users.
Google collects 100% of the emails you receive on Gmail, since they’re delivered straight to their servers… so they see those completely… and they also see the emails you send to other providers, since those originate from their servers (so they collect information that ends up going to an MS Exchange server as well…).
Self-hosting this means you’re collecting your own shit… and companies can only get the outgoing side that goes to their users, never the full picture of your systems/emails.
This matters a lot more than you think. Lots of automated systems send through services like Mailchimp, PHPMailer, etc., so those emails from your doctor likely never originated from MS or Google to begin with. When they hit your inbox on Gmail or Outlook… well, now they’re on their system. Now they can analyze them.
PiHole, Plex and the related “*arr” apps. I also self-host my home automation platform (Home Assistant).
Me too, except it’s Adguard for me.
Came in handy yesterday actually. I have a friend who works for a University which was recycling some Chromebooks.
He managed to grab 3 for me: one for myself and one for each of my kids.
Problem is, one of my kids is supervised through Google Family Link, which for some reason means the Play Store won’t work.
So he is now unsupervised in Family Link just to get the Chromebook working.
So I’ve just given both my kids static IPs and pointed their Chromebooks at Adguard, then turned on Safe Search and adult content blocking.
Now I’m fairly confident they’re protected from a lot of the bad shit on the internet.
I’ve configured my kids’ devices to use NextDNS; that way they get filtering no matter what network they use.
AdGuard does what I need internally; it’s external that’s the issue. VPNs are not a solution: my kids are old enough to know they can just disable one to work around it. They don’t know about the Private DNS option that I have configured on their devices… yet.
pihole, in front of my own DNS, because it’s easier to have it do the domain filtering.
mythtv/kodi, because I’d rather buy DVDs than stream; rather stream than pirate; but still like to watch the local news.
LAMP stack, because I like watching some local sensor data, including fitness equipment, and it’s a convenient place to keep recipes and links to things I buy regularly but rarely (like furnace filters).
Homeassistant, because they already have interfaces to some sensors that I didn’t want to sort out, and it’s useful to have some lights on timers.
I also host, internally, a fake version of quicken.com, because it lets me update stock quotes in Quicken2012 and has saved me having to upgrade or learn a new platform.
Do you have any input on running your Pi-hole as the DNS server itself versus how you have it, with Pi-hole in front of a standalone DNS server, as to which is functionally “more better”?
I had been toying with making my pi-hole into a full DNS server using Unbound, but I had been debating whether it would be better to have that service running separately.
I have isc-bind running behind pihole so network clients can register their own hostnames, and as near as I can tell, that’s outside the scope of pihole’s DHCP and dnsmasq. Pihole alone is probably fine if you only want to name static hosts, but (I understand) Unbound doesn’t support ddns, either.
Unbound will take updates via API. You could either write exit hooks on your clients, or use the “on commit” event on isc-dhcp-server to construct parameters and execute a script when a new lease is handed out.
Unbound is incredibly lightweight. There’s no reason not to just have it running on the same box as your pihole.
For media, I host some of the *arr apps, qBittorrent, Jellyfin, gpodder2go, and Navidrome. For personal photos, I host PhotoPrism. I host a file-sharing service, FileShelter, and a link-shortening service, chhoto-url. I host Wiki.js, mostly for recipes and some notes. I’ve recently started hosting Forgejo for my git repos. I also host SageMath for computation; it’s especially useful when I only have my phone with me and need to use it. I use Caddy as a reverse proxy and serve these through a VPS using a WireGuard tunnel.
I host way more than I probably should, but everyone should have some stuff like immich, vaultwarden, and nextcloud. I also like to host gitea and 30+ other things (check out netboot.xyz, it isn’t something everyone needs but why wouldn’t you want to be able to boot off the network), but that’s just what some people do as a hobby I guess lol.
I just set up netboot.xyz this evening as an experiment. It’s pretty cool.
Nothing federated. I respect everyone who makes it possible, and there’s an actual path to me being willing to participate, unlike corporate social media, but the level of exposure/overhead to prevent having genuinely bad shit touch my server is not something I’m comfortable with. I want stuff I can ignore for a week and not have the end of the world happen, which means at most user generated content from people I know personally.
In terms of what I’m currently hosting, just some mild personal content servers and a discord bot running a couple games on small servers with friends.
I’d like to get further into a personal site: to share my pictures/videos with friends, document/share my reading in ways Goodreads and the available alternatives don’t, and similar things that I’m genuinely fine with no one looking at, but where I can tell a friend “yeah, these are my favorite psychology books with a blurb on each”, or “these are my favorite fiction series (actually organized by series as first-class citizens, because no one really does that) with quick summaries of what I like about them”, etc. I do a couple of the lists on Goodreads, but you can’t do blurbs on series or lists by series, it won’t even display your lists ordered or with your reviews properly included anymore, and ultimately I’m going to track it all anyway, so I want it structured and displayed in a way that actually makes sense to me.
I don’t really want social media features, and I definitely don’t want to try to “grow it” or any of that nonsense. Ultimately I want to better track and organize all of that and don’t really love the tools available, so I’m rolling my own, and I might as well pretty up the presentation and make some of it public-facing to discuss with friends once I get the proper structuring handled.