My Homelab Setup
Posted by photon_collider 2 days ago
Comments
Comment by linsomniac 2 days ago
In Bitwarden they allow you to configure the matching algorithm, and switching from the default to "starts with" is what I do when I find that it is matching the wrong entries. So for this case just make sure that the URL for the service includes the port number and switch all items that are matching to "starts with". Though it does pop up a big scary "you probably didn't mean to do this" warning when you switch to "starts with"; would be nice to be able to turn that off.
Comment by PunchyHamster 1 day ago
In homelab space you can also make wildcard DNS pretty easily in dnsmasq, assuming you also "own" your router. If not, hosts file works well enough.
There is also option of using mdns for same reason but more setup
Comment by overfeed 1 day ago
Bitwarden annoyingly ignores subdomains by default. Enabling per-subdomain credential matching is a global toggle, which breaks autocomplete on other online services that allow you to log in across multiple subdomains.
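For intuition, the difference between base-domain matching and host matching is roughly this (a Python sketch, not Bitwarden's actual code; real matchers use the Public Suffix List rather than "last two labels"):

```python
from urllib.parse import urlsplit

def base_domain(url: str) -> str:
    """Naive 'base domain': last two labels of the hostname.
    (A sketch; real implementations consult the Public Suffix List.)"""
    host = urlsplit(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def host_matches(entry_url: str, page_url: str) -> bool:
    """'Host' matching distinguishes subdomains and ports."""
    e, p = urlsplit(entry_url), urlsplit(page_url)
    return (e.hostname, e.port) == (p.hostname, p.port)

# Base-domain matching treats every subdomain as the same site:
print(base_domain("https://nas.home.example.com") ==
      base_domain("https://git.home.example.com"))  # True
# Host matching does not:
print(host_matches("https://nas.home.example.com",
                   "https://git.home.example.com"))  # False
```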
Comment by danparsonson 1 day ago
Comment by rodolphoarruda 1 day ago
Comment by freeplay 1 day ago
Comment by Groxx 1 day ago
Comment by c-hendricks 1 day ago
Comment by nerdsniper 1 day ago
Comment by simondotau 1 day ago
For things like Home Assistant I use the following subdomain structure, so that my password manager does the right thing:
service.myhouse.tld
local.service.myhouse.tld
Comment by c-hendricks 1 day ago
Comment by tehlike 1 day ago
Comment by gerdesj 1 day ago
DNS. SNI. RLY?
Comment by sv0 1 day ago
On Debian/Ubuntu, hosting local DNS service is easy as `apt-get install dnsmasq` and putting a few lines into `/etc/dnsmasq.conf`.
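For example (a minimal sketch; the domain and IP are placeholders, not from this thread):

```shell
# assumes a Debian/Ubuntu box; every *.home.lan name resolves to one server
sudo apt-get install -y dnsmasq
cat <<'EOF' | sudo tee /etc/dnsmasq.d/homelab.conf
# wildcard record: answer all *.home.lan queries with the homelab server's IP
address=/home.lan/192.168.1.10
EOF
sudo systemctl restart dnsmasq
```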
Comment by merpkz 1 day ago
Comment by tbyehl 1 day ago
Comment by predkambrij 1 day ago
Comment by timwis 1 day ago
Comment by dpoloncsak 1 day ago
Then, I use Tailscale to connect everything together. Tailscale lets you use a custom DNS, which gets pointed at the PiHole. My phone blocks ads even when I'm away from the house, and I can even hit any services or projects without exposing them to the general internet.
Then I set up an NGINX reverse proxy, but honestly that might not be necessary.
Comment by brownindian 1 day ago
1. your 1password gets a different entry each time for <service>.<yourdomain>.<tld>
2. you get https for free
3. Remote access without Tailscale.
4. Put Cloudflare Access in front of the tunnel, now you have a proper auth via Google or Github.
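The cloudflared side of that might look like this (a sketch; tunnel name, hostnames, and ports are placeholders, not from the thread):

```yaml
# /etc/cloudflared/config.yml (sketch)
tunnel: my-home-tunnel
credentials-file: /etc/cloudflared/my-home-tunnel.json
ingress:
  - hostname: jellyfin.yourdomain.tld
    service: http://localhost:8096
  - hostname: portainer.yourdomain.tld
    service: http://localhost:9000
  # a catch-all rule is required and must come last
  - service: http_status:404
```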
Comment by lukevp 1 day ago
Comment by organsnyder 1 day ago
Comment by johnmaguire 1 day ago
Comment by sylens 1 day ago
Comment by QGQBGdeZREunxLe 1 day ago
Comment by miloschwartz 1 day ago
Comment by somehnguy 1 day ago
No open ports on my internal network, Tailscale handles routing the traffic as needed. Confirmed that traffic is going direct between hosts, no middleman needed.
Comment by arvid-lind 1 day ago
Comment by mvdtnz 1 day ago
Comment by lloydatkinson 2 days ago
Comment by cortesoft 1 day ago
Matching on base domain as the default was surprising to me when I started using Bitwarden... treating subdomains as the same seems dangerous.
Comment by akersten 1 day ago
Actually it's mostly financial institutions that I've seen this happen with. Have to wonder if they all share the same web auth library that runs on the Z mainframe, or there's some arcane page of the SOC2 guide that mandates a minimum of 3 redirects to confuse the man in the middle.
Comment by tylerflick 2 days ago
Comment by techcode 1 day ago
You don't need to have any real/public DNS records on that domain, just own the domain so LetsEncrypt can verify and give you SSL certificate(s).
You set up local DNS rewrites in AdGuard - and point all the services/subdomains to your home server's IP; Caddy (or similar) on that server routes each one to the correct port/container.
With Tailscale or similar - you can also configure all Tailscale clients to use your AdGuard as DNS - so this can work even outside your home.
That's how I have e.g.: https://portainer.myhome.top https://jellyfin.myhome.top ...etc...
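The Caddy half of that is only a few lines (a sketch; the ports are guesses, and for LAN-only hosts you'd want a wildcard cert via a DNS-challenge plugin):

```Caddyfile
# Caddyfile sketch: one site block per subdomain, proxied to a local port
portainer.myhome.top {
    reverse_proxy 127.0.0.1:9000
}
jellyfin.myhome.top {
    reverse_proxy 127.0.0.1:8096
}
```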
Comment by dewey 2 days ago
Comment by domh 2 days ago
https://tailscale.com/docs/features/tailscale-services
Then you can access stuff on your tailnet by going to http://service instead of http://ip:port
It works well! Only thing missing now is TLS
Comment by avtar 2 days ago
> tailscale serve --service=svc:web-server --https=443 127.0.0.1:8080
> http://web-server.<tailnet-name>.ts.net:443/ > |-- proxy http://127.0.0.1:8080
> When you use the tailscale serve command with the HTTPS protocol, Tailscale automatically provisions a TLS certificate for your unique tailnet DNS name.
So is the certificate not valid? The 'Limitations' section doesn't mention anything about TLS either:
https://tailscale.com/docs/features/tailscale-services#limit...
Comment by domh 1 day ago
Comment by nickdichev 1 day ago
Comment by avtar 1 day ago
Comment by altano 1 day ago
Comment by oarsinsync 1 day ago
I'm guessing this is 1Password 8 only, as I can't see this option in 1Password 7.
Comment by vladvasiliu 1 day ago
Comment by jorvi 1 day ago
Comment by mhurron 1 day ago
Comment by karlshea 1 day ago
Comment by miloschwartz 1 day ago
Comment by wrxd 2 days ago
Comment by dewey 2 days ago
Comment by zackify 2 days ago
Problem solved ;)
Comment by m463 1 day ago
Comment by ozim 1 day ago
If you expose something by mistake, it should still be fine.
Big problem with PW reuse is using the same for very different systems that have different operators who you cannot trust about not keeping your PW in plaintext or getting hacked.
Comment by photon_collider 1 day ago
Comment by harrygeez 1 day ago
Comment by acidburnNSA 2 days ago
* nginx with letsencrypt wildcard so I have lots of subdomains
* No tailscale, just pure wireguard between a few family houses and for remote access
* Jellyfin for movies and TV, serving to my Samsung TV via the Tizen jellyfin app
* Mopidy holding my music collection, serving to my home stereo and numerous other speakers around the house via snapcast (raspberry pi 3 as the client)
* Just using ubuntu as the os with ZFS mirroring for NAS, serving over samba and NFS
* Home assistant for home automation, with Zigbee and Z-wave dongles
* Frigate as my NVR, recording from my security cams, doing local object detection, and sending out alerts via Home Assistant
* Forgejo for my personal repository host
* tar1090 hooked to a SDR for local airplane tracking (antenna in attic)
This all pairs nicely with my two openwrt routers, one being the main one and a dumb AP, connected via hardwire trunk line with a bunch of VLANs.
Other things in the house include an iotawatt whole-house energy monitor, a bunch of ESPs running holiday light strips, indoor and outdoor homebrew weather stations with laser particulate sensors and CO2 monitors (alongside the usual sensors), a water-main cutoff (zwave), smart bulbs, door sensors, motion sensors, sirens/doorbells, and a thing that listens for my fire alarm and sends alerts. Oh and I just flashed the pura scent diffuser my wife bought and lobotomized it so it can't talk to the cloud anymore, but I can still automate it.
I love it and have tons of fun fiddling with things.
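The wildcard cert in the first bullet needs a DNS-01 challenge, e.g. (a sketch; the DNS plugin and domain are examples - pick the certbot plugin matching your DNS provider, installed separately):

```shell
# wildcard Let's Encrypt cert via DNS-01; works even for LAN-only hosts
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d 'example.com' -d '*.example.com'
```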
Comment by VladVladikoff 1 day ago
Comment by zem 1 day ago
yikes!
Comment by bjackman 1 day ago
(Probably a lot of the services I run don't even really support HA properly in a k8s system with replicas. E.g. taking global exclusive DB locks for the lifetime of their process)
Comment by embedding-shape 1 day ago
Huh, why? I have a homelab, I don't have any downtime except when I need to restart services after changing something, or upgrading stuff, but that happens what, once every month in total, maybe once every 6 months or so per service?
I use systemd units + NixOS for 99% of the stuff, not sure why you'd need Kubernetes at all here, only serves to complicate, not make things simple, especially in order to avoid downtime, two very orthogonal things.
Comment by bjackman 1 day ago
So... you have downtime then.
(Also, you should be rebooting regularly to get kernel security fixes).
> not sure why you'd need Kubernetes at all here
To get HA, which is what we are talking about.
> only serves to complicate
Yes, high-availability systems are complex. This is why I am saying it's not really feasible for a homelabber, unless we are k8s enthusiasts I think the right approach is to tolerate downtime.
Comment by embedding-shape 1 day ago
5 seconds of downtime as you change from port N to port N+1 is hardly "downtime" in the traditional sense.
> To get HA, which is what we are talking about.
Again, not related to Kubernetes at all, you can do it easier with shellscripts, and HA !== orchestration layer.
Comment by flipped 1 day ago
Comment by flipped 1 day ago
Comment by furst-blumier 1 day ago
Comment by ryukoposting 1 day ago
Comment by flipped 1 day ago
Comment by wbjacks 1 day ago
Comment by pajamasam 2 days ago
Comment by c-hendricks 2 days ago
It's an i7-4790k from 12 years ago, it barely breaks a sweat most hours of the day.
It's not really that impressive, or (not to be a jerk) you've overestimated how expensive these services are to run.
Comment by hypercube33 1 day ago
Comment by pajamasam 2 days ago
Comment by decryption 1 day ago
Comment by shiroiuma 1 day ago
Comment by renehsz 1 day ago
Comment by hombre_fatal 1 day ago
They recommend 1GB RAM per 1TB storage for ZFS. Maybe they mean redundant storage, so even 2x16TB should use 16GB RAM? But it's painful enough building a NAS server when HDD prices have gone up so much lately.
The total price tag already feels like you're about to build another gaming PC rather than just a place to back up your machines and serve some videos. -_-
That said, you sure need to be educated on BTRFS to use it in fail scenarios like degraded mode. If ZFS has a better UX around that, maybe it's a better choice for most people.
Comment by renehsz 1 day ago
Otherwise, the only benefit more RAM gets you is better performance. But it's not like ZFS performs terribly with little RAM. It's just going to more closely reflect raw disk speed, similar to other filesystems that don't do much caching.
I've run ZFS on almost all my machines for years, some with only 512MiB of RAM. It's always been rock-solid. Is more RAM better? Sure. But it's absolutely not required. Don't choose a different file system just because you think it'll perform better with little RAM. It probably won't, except under very extreme circumstances.
Comment by c-hendricks 1 day ago
Comment by acidburnNSA 1 day ago
Way way overspeced for what I listed, but I use it for lots of video processing, numerical simulations, and some local AI too.
I have a similar subset of this stuff running at my mom's house on a 16 GB RAM Beelink minicomputer. With OpenVINO, Frigate can still do fully local object detection on the security cams, which is sweet.
Comment by drnick1 2 days ago
Comment by pajamasam 1 day ago
Comment by embedding-shape 1 day ago
Comment by drnick1 1 day ago
Comment by cyberpunk 2 days ago
Comment by tclancy 2 days ago
Comment by flipped 1 day ago
Comment by TacticalCoder 1 day ago
Not GP but I have lots of fun running VMs and lots of containers on an old HP Z440 workstation from 2014 or so. This thing has 64 GB of ECC RAM and costs next to nothing (a bit more now with RAM that went up). Thing is: it doesn't need to be on 24/7. I only power it up when I first need it during the day. 14 cores Xeon for lots of fun.
Only thing I haven't moved to it yet is Plex, which still runs on a very old HP Elitedesk NUC. Dunno if Plex (and/or Jellyfin) would work fine on an old Xeon: but I'll be trying soon.
Before that I had my VMs and containers on a core i7-6700K from 2015 IIRC. But at some point I just wanted ECC RAM so I bought a used Xeon workstation.
As someone commented: most services simply do not need that beefy of a machine. Especially not when you're strangled by a 1 Gbit/s Internet connection to the outside world anyway.
For compilation and overall raw power, my daily workstation is a more powerful machine. But for a homelab: old hardware is totally fine (especially if it's not on 24/7 and I really don't need access to my stuff when I sleep).
Comment by leptons 1 day ago
It does have 16 spinning disks in it, so I accept that I pay for the energy to keep them spinning 24/7, but I like the redundancy of RAID10, and I have two 8-disk arrays in the machine. And a Ryzen-7 5700G, 10gbit NIC, 16 port RAID card, and 96GB of RAM.
Comment by shellwizard 1 day ago
In my case I fell in love with the tiny/mini/micros and have a refurbished Lenovo m710q running 24/7, using only 5W when idling. I know it doesn't support ECC memory or more than 8 threads, but for my use case it's more than enough.
Comment by gessha 1 day ago
Comment by leptons 1 day ago
Comment by matja 1 day ago
Comment by leptons 1 day ago
I do have a bit more than just that server hooked up to it. There's also a Dell i5 running DDWRT as my main gateway/router, the fiber internet modem, a small Synology NAS, a couple of WIFI routers, etc. It all adds up.
That doesn't include my backup server out in the garage with another 8-disk RAID10 array and an LTO tape drive that is often backing up data, 5 more WIFI routers around the property, and 10 or so security cameras. So I'm probably well over $100/mo for all my tech stuff.
Comment by jamiemallers 1 day ago
Comment by xoa 2 days ago
Clearly it's worked for them here, and I'm happy to see it. Maybe the bug will truly bite them but there's so much incredibly capable hardware now available for a song and it's great to see anyone new experiment with bringing stuff back out of centralized providers in an appropriately judicious way.
Edit: I'll add as well, that this is one of those happy things that can build on itself. As you develop infrastructure, the marginal cost of doing new things drops. Like, if you already have a cheap managed switch setup and your own router setup whatever it is, now when you do something like the author describes you can give all your services IPs and DNS and so on, reverse proxy, put different things on their own VLANs and start doing network isolation that way, etc for "free". The bar of giving something new a shot drops. So I don't think there is any wrong way to get into it, it's all helpful. And if you don't have previous ops or old sysadmin experience or the like then various snags you solve along the way all build knowledge and skills to solve new problems that arise.
Comment by ryandrake 1 day ago
Just like you don't really need the official Pi-hole software. It's a wrapper around dnsmasq, so you really just need dnsmasq.
A habit of boiling your application down to the most basic needs is going to let you run a lot more on your lab and do so a lot more reliably.
Comment by rpcope1 1 day ago
Hardware is kind of the same deal; you can buy weird specialty "NAS hardware" but it doesn't do well with anything offbeat, or you can buy some Supermicro or Dell kit that's used and get the freedom to pick the right hardware for the job, like an actual SAS controller.
Comment by dizhn 1 day ago
Comment by shiroiuma 1 day ago
That's exactly what TrueNAS is these days: it's Debian + OpenZFS + a handy web-based UI + some extra NAS-oriented bits. You can roll your own if you want with just Debian and OpenZFS if you don't mind using the command line for everything, or you can try "Cockpit".
The nice thing about TrueNAS is that all the ZFS management stuff is nicely integrated into the UI, which might not be the case with other UIs, and the whole thing is set up out-of-the-box to do ZFS and only ZFS.
Comment by globular-toast 1 day ago
But for my own sanity I prefer out of the box solutions for things like my router and NAS. Learning is great but sometimes you really just need something to work right now!
Comment by flipped 1 day ago
Comment by flipped 1 day ago
Comment by lostlogin 2 days ago
The fiasco you can cause when you try to fix, update, change etc. makes this my favourite too.
Household life is generally in some form of ‘relax’ mode in evening and at weekends. Having no internet or movies or whatever is poorly tolerated.
I wish Apple was even slightly supportive of servers and Linux as the mini is such a wicked little box. I went to it to save power. Just checked - it averaged 4.7w over the past 30 days. It runs Ubuntu server in UTM which notably raises power usage but it has the advantage that Docker desktop isn’t there.
Comment by xoa 2 days ago
I think some of the difference between "self-hosted" vs "homelab" is in the answer to the question of "What happens if this breaks end of the day Friday?" An answer of "oh merde of le fan, immediate evening/weekend plans are now hosed" is on the self-hosted end of the spectrum, whereas "eh, I'll poke at it on Sunday when it's supposed to be raining or sometime next week, maybe" is on the other end. Does that make sense? There are a few pretty different ways to approach making your setup reliable/redundant but I think throwing more metal at the problem features in all of them one way or another. Plus if someone moves up the stack it can simply be a lot more efficient and performant; the sort of hardware suited for one role isn't necessarily as well suited for another, and trying to cram too much into one box may result in something worse AND more expensive than breaking out a few roles.
But probably a lot of people who ended up doing more hosting started pretty simple, dipping their toes in the water, seeing how it worked out and building confidence. And having everything virtualized on a single box is a pretty easy and highly flexible way to get going and experiment. Also, if it's on a ZFS backing, it makes "reset/rollback world" quite straightforward with minimal understanding, given you can just use the same snapshot mechanism for that as you do for all other data. Issues with circular dependencies and the like, or what happens if things go down when it's not convenient for you to be around in person, don't really matter that much. I think anything that lowers the barrier to entry is good.
Of course, someone can have some of each too! Or be somewhere along the spectrum, not at one end or another.
Comment by lostlogin 2 days ago
Docker-compose isn’t a backup, but from a fresh ubuntu server install, it’ll have me back in 20 mins. Backing up the entire VM isn’t too hard either.
I was in a really sweet spot and then ESXi became intolerable. Though in fairness their website was always pure hell.
Comment by vermaden 2 days ago
Big downgrade after moving to Linux:
- https://vermaden.wordpress.com/2024/04/20/truenas-core-versu...
Comment by photon_collider 1 day ago
I definitely will want to have a dedicated NAS machine and a separate server for compute in the future. Think I'll look more into this once RAM prices come back to normal.
Comment by PunchyHamster 2 days ago
Really, we should rename that kind of device to HSSS (Home Service Storage Server)
Comment by globular-toast 1 day ago
I really prefer storage just being storage. For security it makes a lot of sense. Stuff on my network can only access storage via NFS. That means if I were to get malware on my network and it corrupted data (like ransomware), it won't be able to touch the ZFS snapshots I make every hour. I know TrueNAS is well designed and they are using Docker etc, but it still makes me nervous.
I guess when I finally have to replace my NAS I'll have to go Linux, but it'll still be just a NAS for me.
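The hourly-snapshot scheme above can be as simple as a cron entry (a sketch; pool/dataset names are placeholders, and pruning old snapshots is left to a tool like sanoid):

```shell
# crontab entry - % must be escaped in crontab syntax:
# 0 * * * * /sbin/zfs snapshot "tank/data@auto-$(date +\%F-\%H)"
# run by hand, it's just:
zfs snapshot "tank/data@auto-$(date +%F-%H)"
# snapshots are read-only, so an NFS client (or ransomware on one)
# can't modify or delete them
```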
Comment by freetonik 2 days ago
Comment by natterangell 2 days ago
Comment by reddalo 1 day ago
Comment by bluehatbrit 1 day ago
Comment by lowdude 1 day ago
I still have to check if this actually works in practice, but I am hopeful. I based it off their documentation here: https://docs.hetzner.com/storage/object-storage/faq/s3-crede...
Comment by reddalo 1 day ago
The main problem is that it sometimes slows down to a crawl, or requests fail altogether.
Comment by polairscience 2 days ago
Which is to say, hardware is cheap, software is open, and privacy is very hard to come by. Thus I've been thinking I'd like to not use cloud providers and just keep a duplicate system at a friends, and then of course return the favor. This adds a lot of privacy and quite a bit of redundancy. With the rise of wireguard (and tailscale I suppose), keeping things connected and private has never been easier.
I know that leaning on social relationships is never a hot trend in tech circles but is anyone else considering doing this? Anyone done it? I've never seen it talked about around here.
Comment by nsbk 2 days ago
Comment by polairscience 2 days ago
Comment by Root_Denied 1 day ago
I'm able to set it up so that my SO and I can view all the pictures taken by the other (mostly cute photos of our dog and kid, but makes it easier to share them with others when we don't have to worry about what device they're on), have it set to auto-backup, and routed through my VPS so it's available effectively worldwide.
The only issue that I run into is a recent one, which is hard drive space - I've got it on a NAS/RAID setup with backups sent to another NAS at my parents' place, but it's an expensive drive replacement in current market conditions.
Comment by michelsedgh 1 day ago
Comment by neop1x 1 day ago
Comment by nine_k 2 days ago
Hardware was cheap a year ago. Whoever managed to build their boxes full of cheap RAM and HDDs, great, they did the right thing. It will be some time until such an opportunity presents itself again.
Comment by Evan-Purkhiser 1 day ago
Whole thing cost around $500. Before that I was paying ~$35 a month for a Google workspace with 5TB of drive space. At one point in the past it was “unlimited” space for $15 a month. Figure the whole thing will pay for itself in the next couple of years.
Actually just finished the initial replication of my 10TB pool. I ran into a gnarly situation where zrepl blew away the initial snapshot on the source pool just after it finished syncing, and I ended up having to patch in a new fake “matching” snapshot. I had claude write up a post here, if you’ll excuse the completely AI generated “blog post”, it came up with a pretty good solution https://gist.github.com/evanpurkhiser/7663b7cabf82e6483d2d29...
Comment by bluedino 1 day ago
Have been doing this for 25 years.
If you have asymmetrical connections, it's easiest to do the initial backup locally, then take your drive(s) to your friend's house and just sync/update from there.
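In rsync terms that's roughly (a sketch; hosts and paths are placeholders):

```shell
# 1) seed a drive locally, at full disk speed
rsync -a /srv/backup/ /mnt/seed-drive/
# 2) plug the drive in at the friend's house, then keep it current over
#    the (slow) uplink - only changed files transfer on later runs
rsync -a --delete /srv/backup/ friend-host:/srv/offsite/
```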
Comment by Jedd 1 day ago
Syncthing has the 'untrusted peer' feature, which I've only used once, accidentally, but I believe provides an elegant way of providing some disk for a friend while maintaining privacy of the content.
Comment by mtsolitary 2 days ago
Comment by xandrius 2 days ago
The setup mentioned in the article has an avg 600 kWh/year as opposed to a pretty solid HP EliteDesk (my own homelab) which uses 100 kWh/year. Sure you don't get a GPU but for what it is used for, you might as well use a laptop for that.
Comment by firecall 1 day ago
If you are doing a DIY NAS with HDDs then you want real SATA ports, or a well-supported PCI card with SATA ports, which you can't sensibly connect to a laptop or micro PC. Sure, you might be able to use Thunderbolt to reliably hook up an external PCI chassis, but then you might as well buy a NAS at that point, or use a full tower case with an ATX mobo!
Using an older Gaming PC you already have is actually a very good option for TrueNAS or OMV.
I took an older 10th Gen Intel Gaming PC we had, sold the core i9 CPU, and replaced it with an i7-10700T I found used on eBay.
I'm finding this setup to be better for my needs than various ex-lease Dell Micro PCs I've used in the past, mainly because of the reliability of the SATA ports.
I've found quality external Samsung T5 SSDs to be very reliable over USB with TrueNAS. But HDDs are a nightmare over USB for a NAS, in my experience.
I was hoping this might be the year that I can finally get rid of the spinning rust. But looks like AI data centres had other ideas! :-)
However, I will say that if you just want to run some virtualized Linux servers or similar, then ex-lease micro PCs are a fantastic deal and can be fun to setup and learn Proxmox and Truenas etc..
Comment by bpye 1 day ago
You could certainly install a SAS or SATA controller, the issue would be having somewhere to mount the drives, and a way to power them. External SAS enclosures are not cheap.
Comment by sambf 1 day ago
Comment by predkambrij 1 day ago
Comment by Havoc 1 day ago
A good AM4 board can do 7 nvme, 8 sata and ecc ram.
Comment by hparadiz 1 day ago
Comment by noname120 1 day ago
Comment by flipped 1 day ago
Comment by ivanjermakov 1 day ago
Comment by denkmoon 1 day ago
Comment by drnick1 1 day ago
Comment by benlivengood 2 days ago
Comment by nickorlow 1 day ago
(though they were halfway across the US from each other, not in the same town)
Comment by benlivengood 1 day ago
Comment by nickorlow 1 day ago
Comment by tuananh 1 day ago
many people with a setup like this probably need just a 4-core, low-powered machine with idle consumption at ~5-10W
Comment by Semaphor 1 day ago
Comment by ErneX 1 day ago
I also have another xcp-ng host for other VMs running on a Dell OptiPlex Micro.
OP should configure DNS locally and reverse proxy each service, I use bind 9 and nginx for that.
Comment by seriocomic 1 day ago
Comment by kleebeesh 2 days ago
> Right now, accessing my apps requires typing in the IP address of my machine (or Tailscale address) together with the app’s port number.
You might try running Nginx as an application, and configure it as a reverse proxy to the other apps. In your router config you can set up foo.home and bar.home to point to the Nginx IP address, and then the Nginx config tells it to proxy foo.home to IP:8080 and bar.home to IP:9090. That's not a thorough explanation, but I'm sure you can plug this into an LLM and it'll spell it out for you.
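A minimal sketch of that Nginx config (hostnames and ports taken from the example above; assumes the apps run on the same box as Nginx):

```nginx
server {
    listen 80;
    server_name foo.home;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name bar.home;
    location / {
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
    }
}
```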
Comment by c-hendricks 2 days ago
You can then set your DNS in Tailscale to that machine's tailnet IP and access your servers when away without having to open any ports.
And bonus, if it's pihole for dns you now get network-level Adblock both in and outside the home.
Comment by mnahkies 2 days ago
I've found this to work quite well, and the SSL, whilst somewhat meaningless from a security POV since the traffic is already encrypted by WireGuard, makes the web browser happy, so it's still worthwhile.
Comment by pajamasam 2 days ago
Comment by verdverm 2 days ago
Comment by victorio 2 days ago
Comment by hk1337 2 days ago
Comment by cyberpunk 2 days ago
Comment by verdverm 2 days ago
Do you know how I might approach this better?
Comment by windexh8er 2 days ago
DevOpsToolbox did a great video on many of the reasons why Caddy is so great (including performance) [0]. I think the only downside with Caddy right now is still how plugins work. Beyond that, however it's either Caddy or Traefik depending on my use case. Traefik is so easy to plug in and forget about and Caddy just has a ton of flexibility and ease of setup for quick solutions.
Comment by verdverm 2 days ago
I use both, they are by and large substitutable. Nginx has a much larger knowledge base and ecosystem, the main reason I stick with it.
Comment by philsnow 2 days ago
One tricky thing about nginx though, from the "If is evil" nginx wiki [0]:
> The if directive is part of the rewrite module which evaluates instructions imperatively. On the other hand, NGINX configuration in general is declarative. At some point due to user demand, an attempt was made to enable some non-rewrite directives inside if, and this led to the situation we have now.
I use nginx for homelab things because my use-cases are simple, but I've run into issues at work with nginx in the past because of the above.
Comment by dwedge 2 days ago
Some people take this way too far; for instance, I've seen places compiling (end-of-life) modsec support into nginx instead of using the webserver it was built for.
Comment by windexh8er 1 day ago
Traefik is far more capable, for example. If all you're doing is serving pages, sure.
Comment by ls612 2 days ago
Comment by Frotag 2 days ago
Comment by frumiousirc 2 days ago
Comment by anon7000 2 days ago
Comment by hazrmard 22 hours ago
[1]: https://github.com/ddclient/ddclient
[2]: https://kb.netgear.com/1058/What-is-Dynamic-DNS-DDNS
Comment by izacus 1 day ago
That is - handling laptop going to sleep during backup, laptop being on only for shorter periods of time, etc.?
Because I had issues with backup tooling which wouldn't resume if it got interrupted and expected for the machine to always run at certain hour of the day. I had examples where laptops wouldn't backup for months because they were only on for a short 30-60min bursts at the time and the backup tools couldn't handle piece-meal resume.
How does restic handle that?
Comment by calcifer 1 day ago
Comment by izacus 1 day ago
Comment by dizhn 1 day ago
It will resume from where it got interrupted. The only exception is the initial backup where it doesn't have a snapshot yet.
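Mechanically, that works because restic deduplicates by chunk: re-running the same backup command skips data already uploaded to the repo (a sketch; the repo path is a placeholder):

```shell
export RESTIC_REPOSITORY=/srv/restic-repo
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
restic init                  # once, to create the repo
restic backup ~/Documents    # safe to interrupt and re-run; chunks already
                             # in the repo are not re-sent on the next run
```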
Comment by predkambrij 1 day ago
Comment by garyfirestorm 1 day ago
I would also suggest using two instances of AdGuard (one as a backup) and two instances of NPM.
Comment by kelvinjps10 1 day ago
Comment by ryukoposting 1 day ago
Comment by buybackoff 1 day ago
Comment by mcbuilder 1 day ago
Comment by hk1337 2 days ago
I ended up making my own dashboard app, not as detailed as Scrutiny, because I just wanted a central place that linked to all my internal apps so I didn't have to remember them all, plus a simple status check. I made my own in Go because the main ones I found were NodeJS and were huge resource hogs.
Comment by luzionlighting 1 day ago
In architectural lighting projects we often think in a similar way about fixture placement, wiring access and maintenance because poor planning becomes very visible once a space is finished.
Comment by succo 1 day ago
Comment by Prabhapa 1 day ago
Comment by gehsty 1 day ago
Comment by navigate8310 2 days ago
Comment by dizhn 1 day ago
Comment by PunchyHamster 2 days ago
Comment by EdNutting 2 days ago
Comment by SauntSolaire 1 day ago
Comment by drnick1 2 days ago
Comment by EdNutting 2 days ago
Edit: Tailscale has a fairly frank page on Wireguard vs Tailscale with suggestions on when to use which: https://tailscale.com/compare/wireguard
Comment by miloschwartz 1 day ago
Handles both browser-based reverse proxy access and client-based P2P connections like a VPN.
Comment by mattschaller 1 day ago
Comment by sbinnee 1 day ago
Comment by buckle8017 1 day ago
Comment by monkaiju 1 day ago
Comment by ritcgab 2 days ago
Comment by adrien_dev 1 day ago
Comment by adrien_dev 1 day ago
Comment by yowang 1 day ago
Comment by sgt 2 days ago
Comment by tclancy 2 days ago
Comment by sgt 2 days ago
Comment by skyberrys 2 days ago
Comment by tclancy 2 days ago
Comment by switchbak 2 days ago
Comment by akerl_ 2 days ago
Comment by PunchyHamster 2 days ago
Why do you need to dilute the term? There is nothing wrong with your NAS running 3 apps that you press update on once a year not being called a "homelab" but just "a NAS"
Comment by akerl_ 2 days ago
Nobody is diluting anything. This person posted the setup they have in their home. It’s their homelab.
It’s not diluting any terms for them to call it that. Their setup is just as much a homelab as somebody else’s 48U rack.
It’s just a dick move, and against the rules of the site, to see somebody’s earnest post about their tech setup and post a shallow dismissal about how their setup isn’t deserving of your imagined barrier to entry.
Comment by PunchyHamster 1 day ago
The whole idea of homelab (regardless of size) is learning first.
He just has a home server. It's okay to call it that
Comment by akerl_ 1 day ago
Comment by Capricorn2481 1 day ago
Comment by tokyobreakfast 1 day ago
I'm happy for the OP and that it works for him. That said:
The equivalent of Joe Bloggs installing Linux onto an old laptop is neither curious nor interesting, let's not pretend it is because feelings.
Comment by akerl_ 1 day ago
It's also been on the front page for most of the day on its own merits. It's clear you don't like the article. The guidelines are clear that you're expected to either engage constructively or just move along.
Comment by anon7000 2 days ago
Comment by sgt 2 days ago
Comment by HelloUsername 2 days ago
I'm curious about its power consumption on idle, average use, and peak.
Comment by Scene_Cast2 2 days ago
The online activity of the homelab community leans towards those who treat it as an enjoyable hobby as opposed to a pragmatic solution.
I'm on the other side of the spectrum. Devops is (at best) a neutral activity; I personally do it because I strongly dislike companies being able to do a rug-pull. I don't think you'll see setups like mine too often, as there isn't anything to brag about or to show off.