While I get that the whole homelab thing is exciting and a great learning experience, it's simply not worth the time and effort for the majority of people.
You will end up paying much more for your services, along with spending a ton of time maintaining it (and if you don't, you will probably find yourself on the receiving end of a 0-day hack at some point).
In Northern/Western Europe, where power costs around €0.3/kWh on average, just the power consumption of a simple 4 bay NAS will cost you almost as much as buying Google Drive / OneDrive / iCloud / Dropbox / Jottacloud / Whatever.
A simple Synology 4 bay NAS like a DS923+ with 4 x 4TB Seagate Ironwolf drives will use between 150 kWh and 300 kWh per year (100% idle vs 100% active, so somewhere in between), which will cost you between €45 and €90 per year, and that's power alone. Factoring in the cost of the hardware will probably double that (over a 5 year period).
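For anyone who wants to sanity-check those numbers, here is a rough back-of-the-envelope version of the same calculation (a sketch using the kWh figures quoted above, not measurements):

```python
# Back-of-the-envelope check of the figures above (quoted numbers, not measurements)
price_per_kwh = 0.30                      # EUR, the Northern/Western Europe average used above

for kwh_per_year in (150, 300):           # quoted idle vs. active consumption of the 4 bay NAS
    avg_watts = kwh_per_year * 1000 / (24 * 365)
    cost = kwh_per_year * price_per_kwh
    print(f"{kwh_per_year} kWh/year = {avg_watts:.0f} W average draw -> EUR {cost:.0f}/year")
# 150 kWh/year = 17 W average draw -> EUR 45/year
# 300 kWh/year = 34 W average draw -> EUR 90/year
```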
It's cheaper (and easier) to use public cloud, and then use something like Cryptomator (https://cryptomator.org/) to encrypt data before uploading it. That way you get the best of both worlds: privacy without any of the sysadmin tasks.
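(For what it's worth, the principle is just client-side encryption before anything leaves your machine. A minimal Python sketch of the idea using Fernet from the cryptography package - purely illustrative, not Cryptomator's actual vault format, and the file names are placeholders:)

```python
# Encrypt a file locally before it ever touches the cloud sync folder.
# Illustration of client-side encryption only; Cryptomator's real vault format differs.
from pathlib import Path
from cryptography.fernet import Fernet

key_file = Path("vault.key")              # keep this OUT of the cloud folder
if not key_file.exists():
    key_file.write_bytes(Fernet.generate_key())
fernet = Fernet(key_file.read_bytes())

Path("cloud-sync").mkdir(exist_ok=True)
plaintext = Path("taxes-2024.pdf").read_bytes()                    # placeholder file
Path("cloud-sync/taxes-2024.pdf.enc").write_bytes(fernet.encrypt(plaintext))

# Later, on any machine that has vault.key:
restored = fernet.decrypt(Path("cloud-sync/taxes-2024.pdf.enc").read_bytes())
```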
Edit: I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you. Eventually those people won't be there anymore, and the memories you make with those people will matter far more to you in 20 years, than the €20/month you paid for cloud services.
Have you seen the prices for Google Drive et al.? The NAS setup you describe (which I wouldn't consider worth the money for that little space) is what, 12TB with 1 parity drive?
Google One for 10TB is 274,99€/mo (at least in my country), so you'd make back the entire NAS price plus running costs within a few months, let alone years.
There just aren't compelling public cloud offerings at large sizes (my NAS has 30TB of capacity and I'm using 18TB right now), and even if you go through more complex routes like S3 and whatnot, you still get billed more than it's worth. Public cloud is meant for public files; you're paying for a lot of things you don't need, like fast access from everywhere.
The maintenance time is a bit overestimated if you keep it simple.
On my homelab, I update everything every quarter and it takes about 1 hour, so 4 hours a year is pretty reasonable. Docker helps a lot with this.
And I've almost never run into trouble in years, so I have very few unexpected maintenance tasks.
EDIT: I am referring to a homelab that is only accessible for private purposes through a VPN.
As a bare minimum, you should update your server and docker images daily, or at least whenever there's an update (which you won't know unless you check).
If you only access your homelab over VPN or similar, then by all means, update whenever you feel like it, but if you expose your services to the internet, you want to be damned sure there are no vulnerabilities in them.
The internet of today is not like it was 20 years ago. Today you're constantly being hammered by bots that scan every single IPv4 address for open ports, and when they find something they record it in a database, along with information on what's running on that port (provided that information is available).
When (not if) a vulnerability for a given service is discovered, an attacker doesn't need to "hunt & peck" for vulnerable hosts, they already have that information in a database, and all they need to do is start shooting at their list of hosts.
You can use something like shodan.io to see what a would-be attacker might see (you can check your own IP with "ip:xxx.xxx.xxx.xx").
Try entering something like Synology, Proxmox, Truenas, Unraid, Jellyfin, Plex, Emby, or any of the other popular home services.
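If you'd rather script that than poke around the website, Shodan also has an official Python client. A minimal sketch (assumes you have an API key; the placeholder key and the result fields shown are the usual ones, but treat the details as illustrative):

```python
# Quick look at what internet-wide scanners already know (sketch; needs a Shodan API key)
import shodan

api = shodan.Shodan("YOUR_API_KEY")        # placeholder key

for query in ("Synology", "Jellyfin", "Proxmox"):
    try:
        results = api.search(query)
        print(f"{query}: {results['total']} exposed hosts indexed")
        for match in results["matches"][:3]:
            print("  ", match["ip_str"], match["port"], match.get("product", ""))
    except shodan.APIError as err:
        print(f"{query}: query failed ({err})")
```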
Sorry, I should have mentioned that my services are only accessible through a VPN. Otherwise, I completely agree with you.
> As a bare minimum, you should update your server and docker images daily, or at least whenever there's an update (which you won't know unless you check).
I got this setup automatically with Renovate: https://github.com/shepherdjerred/homelab/blob/main/src/cdk8...
It's pretty easy to soft-expose yourself now with things like Cloudflare Tunnels without a lot of the security risks. You can easily put all access behind a secret/API key or an OAuth login.
I definitely need to get my security hygiene up to snuff, but let me ask you: since my reverse proxy (Caddy in my case) refuses connections without a matching domain, would the scans reveal anything about my host if they don't know the URL of my Jellyfin instance?
> which you won't know unless you check
RSS feeds FTW
Who maintains the VPN?
If it were me doing this, either ZeroTier or Tailscale. They aren't strictly VPNs in the traditional sense, but they largely achieve the same ends, and ZeroTier's been much more flexible and performant than anything else I've ever tried.
But on a homelab you can host any service you want and start/stop it whenever you need to. Sure, cloud storage might cost less in the short term, but if you need more storage or more services, a self-hosted option is far cheaper.
VPSes are very expensive for what you get. If you have the capital, doing it yourself saves you money very quickly. It's not rare to pay $50/month for a semi-decent VPS, but for $2000 up front you would get an absolute beast that would last 10 years at the very least.
With Docker, maintenance is basically zero and unused services are stopped or restarted with 1 command.
How many services do you need? And how much CPU power do those services need?
I've also self-hosted for decades, but it turns out I don't really need that much, at least not publicly.
I basically just need mail, calendar, file storage and DNS ad blocking. I can get mail/calendar/file storage with pretty much any cloud provider (and no, there is no privacy when it comes to mail, there is always another participant in the conversation), and for €18/year I can get something like NextDNS, Control D, or similar.
For reference, a Raspberry Pi 4 or 5 will use around 50 kWh per year, which (again, in Europe) will translate to €15/year. For just €3 more per year I get redundancy and support.
I still run a bunch of stuff at home, but nothing is exposed to the public internet. Everything runs behind a WireGuard VPN, and I have zero redundant storage. My home storage is used for backing up cloud storage, as well as storing media and client backups. And yes, I also have another cloud that I back up to.
My total cloud bill is around €20-€25/month, with 8TB of storage, ad blocking DNS, mail/calendar/office apps and even a small VPS.
I did not do the price calculations (in France, and I prefer not to know :) but I host many things except for mail and calendar (mail is tricky to host). Of the 29 services I host, I use maybe 4 daily and 15 monthly. They are well protected, easy to maintain, and serve family and friends.
Not to mention that I love them.
The others' points are valid. Google Drive is rather expensive, Hetzner is cheaper and works well enough.
However, it also depends on how you use that data. In my case, I'm a Sunday photographer, so I tend to wrangle multiple GB of data at a time. I usually edit my photos locally, but I sometimes want to revisit older stuff. I can download it, but it's a PITA and s_l_o_w. Google Drive File Stream is terrible for this; you never know if the files are uploaded or not. OneDrive isn't any better. I haven't tried Dropbox.
Hetzner has a Storage Box offering that exposes SMB but doesn't seem to enforce encryption or IP filtering, so I'm not very comfortable with that.
Also, my internet connection pretends I have 5 Gb/s down and 0.5 Gb/s up. The down part is usually as expected (my machine only has a 1 Gb NIC), but upload is sometimes very slow. Running a local NAS is much faster. It's ZFS, so backups are trivial to send to encrypted offsite storage.
It also doesn't need to run 24/7, which helps with power usage (0.22 €/kWh here).
> I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Indeed. Waiting around for files to transfer gets old quick. I have better things to do with my time. My NAS needs a whopping five minutes of my time every now and again when a new kernel comes out.
Once you know how to build and maintain infrastructure you realize that while it's nice to know how to do it, it's not cost-effective.
The thing is, it's worth it to learn. Do you know the basics of how to set up a completely redundant environment? There's no conceptual difference between setting one up at home by using consumer equipment and setting it up in a data center. You can get pretty capable equipment (Mikrotik) for less. The enterprise stuff has more configuration options, but it's doubtful that you'll use most of them.
Setting up backup WANs, redundant routing, DNS, power, etc. is fun. Setting up redundant load balancers, backend services, databases, etc. is also fun. It's not hard to do, it's just hard to get it all right. There are probably a zillion configuration parameters you can mess with, and only a few sets actually work. Unfortunately, the sets that work in your home won't be the ones that will work in production, but you could possibly run load tests etc. to simulate a real environment (though simulating multiple clients from multiple endpoints is harder than you think).
And of course, getting production equipment is hard. Nobody has 2 F5s lying around. And you really need at least 4 F5s, because you have redundant locations. That's a lot of cashola. And in most environments you wouldn't want some random person messing around with the production (or test) F5s. It's the same with NAS, VM servers, docker registries, etc.
I suspect getting the whole end-to-end setup isn't something people experience anymore, because small companies have (or at least should have) moved to the cloud by now.
Not everything that seems "interesting" is worth it to learn from an economic perspective. Could it be worthwhile for someone studying for the A+ Computer Technician test? Maybe. Could it be worth it for someone looking to impress their boss Harry? Possibly, if Harry also controls your pay and has a penchant for overpaying for locally run infrastructure and a distrust of the cloud. These kinds of investments are best evaluated at the individual level --- not everyone will benefit. Some may find themselves no more competitive in the job market than your average IT clown, but as always, results will vary.
Do you need more space than 2TB? (excluding things you've downloaded from the internet)
Very few people I know have use for that much storage. Yes, you can download the entire Netflix catalog, and that will of course require more storage, and no, you probably shouldn't put it in the cloud (or even back it up, or use redundant storage for it).
Setting up your own homelab to be your own Netflix, but using pirated content, is not really a use case I would consider. I'm aware people are doing it, and I still think it's stupid. They're saving money by "stealing" (or at least breaking laws), which is hardly a lifehack.
My wife is a professional photographer, and while we do archive most of her RAW files somewhere else, pretty much everything in HEIC, JPEG or any other compressed format lives in our main cloud.
We have 2.2TB in total for “direct storage”, and we’re currently using around 1.5TB, and that’s including myself and our kids.
My personal photo library has just short of 90,000 photos, and about 5,000 videos. My wife’s library is roughly twice that. I have no idea how many photos the kids have, but they each take up around 200GB for photos.
And then we have backups, which actually take up about 1TB per person, mostly because that's the space I've allocated for each, so history just grows until it's filled. Photos ideally won't change much. We back up originals along with XMP metadata for edits, so the photos stay the same, and changes are described in easily compressed text files. Backups of course also have deduplication enabled.
You are moving the goalposts and supporting your generic point only under very narrow assumed conditions.
There’s always a “right tool for the job”. Sometimes it’s the cloud. Sometimes it isn’t. The article is for people who found the cloud isn’t a good fit and need something at home.
A lot of people have large collections of music or movies. Or want to keep full control over some data no matter the cost. Or need it to work without internet. There are many solid reasons to avoid the cloud and use your own solution.
You are arguing that your original assertion isn't wrong, so people's stated needs must be wrong - because you have different needs, others must be doing it wrong. And this undermines everything else you say.
> (excluding things you've downloaded from the internet)
Why on earth would I do that? My storage includes things I downloaded from the internet that are not there anymore/hard to find/now paywalled. If you were thinking the only thing to download from the internet is pirated media - I haven't included that in my >2TB assessment.
My homelab is my hobby. I maintain it for my pleasure and to learn new skills. We have an infra nerds club with a few colleagues and we're having a lot of fun comparing our approaches!
As a Northern European (Finland) I can tell you that the electricity cost here last year was closer to €0.10 per kWh, including the transfer fee and taxes. Additionally, more than 40% of houses here have electric heating. The heating season starts in early autumn and ends in late spring, lasting 8 to 9 months depending on the year. As the electricity used by the device is turned into heat, running it during the heating season effectively costs nothing.
Yeah, northern Scandinavia has plenty of renewable energy.
As for electric heating, that is true in 1:1 heating scenarios, but I assume you guys are also using heat pumps these days, and while you still get the heat "for free", it will not be anywhere near as efficient as your heat pump.
Yes, it's probably peanuts in the grand scheme of things; I know our air-to-water heat pump in Denmark uses around 4500-5500 kWh per year, so adding another 100 kWh probably won't mean much.
The premium plan from Google has 2 TB and costs about the same annually as the electricity for the NAS that the GP comment suggested for comparison (at 100% usage). So at the same ONGOING cost (not even counting the initial investment), the NAS has 8 times more storage. 16 times if you assume it will be mostly idle. Except if you want high availability with RAID, then you're back to 8 times. And we haven't yet thought about backups.
All this assuming that you even need that much storage, which most people definitely do not.
I'm willing to bet that far more data has been lost by people hosting their own data than Google has ever lost.
In any case, you should always make backups regardless of where your data is stored. At home, your biggest threat is loss of data, probably through hardware malfunction, house fires or similar.
In the cloud your biggest threat is not loss of data but loss of access to data. Different scenarios but identical outcomes.
Backups solve both scenarios; RAID solves neither of them. But sadly, many people think "oh, but I've got RAID6, so surely I cannot lose data".
How much space do you realistically need high-availability, redundant storage for?
For my personal use case, that involves photos and documents, all things I cannot easily recreate (photos less so). Those are what matter to me, and storing them in the cloud means I not only get redundancy in a single data center, but also geographical redundancy, as many cloud providers use erasure coding to make your data available across multiple data centers.
Everything else will be just fine on a single drive, even a USB drive, as everything that originated on the internet can most likely be found there again. This is especially true for media (purchased, or naval acquisition). Media is probably the most replicated data on the planet, possibly only behind the Bible and the IKEA catalog.
So, back to the important data: I can easily fit an entire family of 4 into a single 2TB data plan. That costs me somewhere around €85 - €100 per year, for 4 people, and it works no matter what I do. I no longer need to drag a laptop with me on vacation, and I can basically just say "fuck it" and go on vacation for 2 weeks.
> everything that originated on the internet can most likely be found there again
I would that this were true. I guess it depends on what you mean by "the internet", but there's a reason the Internet Archive exists. Sure, you don't need to back up your recent Firefox installer or your Debian ISO but lots of important and valuable data can't be found on the internet anymore. There are very valid reasons that groups like Archiveteam [1] do what they do, not to mention recent headlines like individuals losing access to their entire cloud storage [2].
If you need to commute to work daily and you're concerned about the cost, you don't really care whether you're comparing a city car vs a sports car vs the bus; even though one goes 80 km/h and another can do 230 km/h, all you're interested in is the price.
Obviously as your storage needs increase, so will cloud costs, but unless you're a professional photographer, I'm guessing 2TB will be more than enough for most people.
Again, I'm not talking about people trying to run their own media server on pirated content and saving money that way. In my book that's comparable to saving money by robbing a bank. You're not saving anything, you're breaking the law, and 9 out of 10 times it's cheaper to steal someone else's bike than it is to take a taxi home.
I'm talking actual storage for data you actually own, and possibly even data you have created yourself. Anything that came from the internet can be found on the internet again, purchased or naval acquisition.
The price and effort is practically irrelevant. My homelab is mine, local, and answers only to myself and a wall outlet. Also, where I live, the internet is simply not dependable enough to consider otherwise.
OP has admittedly over-engineered their setup. Depending on your goals (cost, speed, space, autonomy), there are less-rigorous solutions for the layperson.
I, for one, don't want to have Google, etc. as a dependency[1], so I will pay some energy cost to do that.
> Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you.
Agreed, but it doesn't have to take time from your family. I'm on a small team that self-hosts internal services to lower costs/risks. It takes very little time to maintain, and maintenance windows happen on our terms. Our uptime this year is better than "Github Actions", the latency is incredible, and we've had no known security issues.
There are two keys to doing this successfully: (1) don't deploy anything you don't understand (so it won't take you long to fix), and (2) even then, aggressively avoid complexity (so it doesn't break in the first place.)
For example, despite significant network expertise, we stuck to a basic COTS router and a simple IPv4 subnet for our servers. And the services we run are typically self-contained golang binaries that we can deploy with bash onto baremetal. No docker, kvm, ansible, or k8s.
This DIY setup saves us considerably more than it costs. Not for everyone, but with proper scoping, many readers of hacker news could pull this off without losing time with their loved ones.
Sure, run a homelab as a hobby. Everybody has hobbies.
Once your user count goes beyond 1, you suddenly have an SLA, as people are dependent on your services. Like it or not, you are now the primary support staff of your local cloud business.
The more users you get, the more time you will need to spend to fix problems.
At which point does it go from being a hobby to a second job?
You're still arguing from the point of view of someone who doesn't want to do it or isn't interested in doing it. Your GP comment said you 'get' homelabs, but it's increasingly clear you do not - and that's ok. People run homelabs because they enjoy learning and tinkering. If they don't enjoy it, or they can't risk having the odd problem, they have other options they can explore. It's not really any more complicated than that. Believe it or not, people are capable of evaluating the tradeoffs and making a sensible decision about what to host themselves.
Yeah, and it also won't move out because you pushed the start button too many times, or stay sullen over the weekend because the dinner plan on Friday got canceled.
Stop with this infantilising crap. People can have both rewarding relationships and pastimes. Just because someone likes configuring software does not suggest they neglect their relationships or have delusions that need correction about the value of things in their life.
Encrypted data on the servers is only useful if your server is just dumb storage. I want the server to actually do something, e.g. serving media, running home assistant etc.
Alternately, most computer vendors actively interfere with your independence and force you into the cloud in various ways with your computers and phones.
> But my server could be shut down because of a power outage or another reason. I might be at work or even on holidays when it happens, and even wireguard can’t solve this.
A 'power outage' incident doesn't seem to have been mitigated. My homelab has had evolving mitigations: I cut a hole in the side of a small UPS so I could connect it to a larger (car) battery for longer uptime, which got replaced by a dedicated inverter/charger/transfer-switch attached to a big-ass AGM caravan battery (which on a couple of occasions powered through two-to-three hour power outages), and has now been replaced with these recent LiFePo4 battery power station thingies.
Of course, it's only a homelab, there's nothing critically important that I'm hosting, but that's not the dang point, I want to beat most of "the things", and I don't like having to check that everything has rebooted properly after a minor power fluctuation (I have a few things that mount remote file stores and these mounts usually fail upon boot due to the speed at which certain devices boot up - and I've decided not to solve that yet).
For anyone thinking of doing this, please please don't. A car battery is probably never a sealed deep cycle battery, and the UPS's charging circuitry is not intended to charge a battery of this size (this is assuming you're using a lead based battery, and not something even more crazy and dangerous like Li-Po or LiFePO4). God forbid you have a cell fail on a car battery and that charger starts cooking the battery. I've had actual car lead acid batteries explode because of poor choices someone else made trying to do something like this, and man when they go, they're dangerous and scary. You really need to pick hardware that's all properly specced and sized for the job...there's a reason Eaton and APC charge what they do.
The better approach (if you have EE skills) is to build your own UPS with LiFePO4 batteries, a proper BMS, and a bunch of USB-C PD ports. Skip the lossy inverter entirely and pick equipment that runs on USB-C PD directly.
Or buy equipment targeting alirack compatibility, i.e., with 240V PSUs that are, with minimal effort, also designed to work on the appropriate DC voltage, so that Alibaba, the alirack standard's originator, could delete the inverter from their UPS and feed PV MPPTs directly into the batteries.
To each their own. I'd personally sleep far more soundly with even a car battery UPS under my bed than with one of those consumer ready lithium ion portable power station batteries they sell on Amazon.
But if you can't explain the difference between voltage and current, or know what "short circuit" means, then this isn't something to poke at.
> I have a few things that mount remote file stores and these mounts usually fail upon boot due to the speed at which certain devices boot up - and I've decided not to solve that yet
If your OS is using systemd, you can fix that pretty easily by adding an After=network-online.target (so the mount isn't even attempted when there is no networking yet) and an ExecCondition shell script [1] that actually checks whether NFS/SMB on the target host is alive, as an override for the fs mounts.
Add a bunch of BindsTo overrides to the mounts and the services that need the data, and you have yourself a way to stop the services automatically when the filesystem goes away.
I've long been in the systemd hater camp, but honestly, not having to wrangle with once-a-minute cronjobs to check for issues is actually worth it.
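For anyone wanting to try the same thing, a minimal sketch of the kind of drop-in overrides described above (the unit names mnt-media.mount and jellyfin.service are placeholders, and the reachability check would be your own script):

```ini
# /etc/systemd/system/mnt-media.mount.d/override.conf   (placeholder mount unit)
[Unit]
# Don't attempt the mount before the network is actually up
After=network-online.target
Wants=network-online.target

# /etc/systemd/system/jellyfin.service.d/override.conf  (placeholder service using the mount)
[Unit]
# Start after the mount, and stop automatically if the mount goes away
After=mnt-media.mount
BindsTo=mnt-media.mount

[Service]
# Optional reachability check before starting (check-nas-alive.sh is your own script)
ExecCondition=/usr/local/bin/check-nas-alive.sh
```

After adding the drop-ins, a systemctl daemon-reload picks them up.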
Even then, that doesn't solve a power outage, primarily because the local nodes your modem transmits to will also be down in the same outage. Most only have backup power for 10-30 minutes, if that; many have no backup power at all, which is why ISPs disclaim emergency calling over VoIP in their service agreements (for services that include unified communications).
So even if your local node could transmit, none of the others could, and they can't buffer either.
To mitigate a power outage, you would need both backup power and a cellular connection, and that connection would only be good for 2-3 hours (cell tower backups), and it would require something like a Cradlepoint.
Author here, indeed I didn't install a UPS. I've tried to keep my setup fairly minimal, and I'm consciously accepting that if there's a power outage my services will be down. I self-host exclusively for myself, not for others.
What I don't want, though, is a power outage putting my server offline while I'm on holiday and leaving me unable to access my services at all.
My ISP-provided router supports WireGuard, so I can use that to connect to my KVM and send the Wake-on-LAN packets.
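(For the curious, the Wake-on-LAN magic packet itself is trivial to craft once the VPN gets you onto the LAN. A minimal Python sketch, with a placeholder MAC address and broadcast address:)

```python
# Minimal Wake-on-LAN sender (sketch; the MAC below is a placeholder)
import socket

def send_wol(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
    # Magic packet = 6 x 0xFF followed by the target MAC repeated 16 times
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the machine to wake
```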
Out of curiosity, did you look through the BIOS options on your computer? Mine is much less capable than yours (it is a used mini-pc) but it has options to boot itself up upon resuming power.
I use a UPS for my internet, and then remote access such as Intel AMT will get you back into your systems, if you've specced your hardware to have such features.
Of course not, but you would VPN in; your router (or a server behind your router, though that's riskier in terms of being able to get back in if something fails) should run a VPN if you plan to use something like AMT.
Same. I have two small UPSes. The first I connect to my computers and it lasts about 15 minutes. Enough for me to save/checkpoint whatever in a restorable way. The second I connect to the wifi router. This lasts a while longer and it's pretty useful. In those 15 minutes the local network still works. And if the power comes back in a minute or two (which happens more often than outages that last hours where I am), then I don't have to wait for the wifi router's painfully slow boot time.
Speaking of... does anyone know how to speed up wifi router boot time? Stupid thing takes 5 minutes almost.
> Speaking of... does anyone know how to speed up wifi router boot time? Stupid thing takes 5 minutes almost.
This is probably due to the access point having minimal hardware for the task, and its startup not being particularly well optimised, so "buy a better AP/router" is likely the most practical answer.
As an alternative, you could buy a small device (perhaps a recent RPi model) with more oomph (or add this task to an existing machine in your homelab setup), give it a wireless NIC if it doesn't already have one, and run hostapd to turn it into an AP. That might start up a lot faster.
I would be happy with a smaller/lighter UPS that only provides 10 seconds of juice. I only purchased the thing because the lights in my apartment would flicker a couple of times a week.
I’ve spent a decent amount on EcoFlow units (Delta 2) that I use as an online UPS for my servers and networking. They work great, and I also recently installed dual 220-watt EcoFlow solar panels on my roof that pump in solar during the day. Works nicely, though the ROI admittedly is not there at all; it's just a cool thing.
I think that says it all. It's gone beyond practicality for me, and I'm OK with that. I'm also satisfied with the current setup; I don't need to spend more.
I have a couple of EcoFlow and Bluetti units, and a Segway LFP battery. They all work fine so far.
> I cut a hole in the side of a small UPS so I could connect it to a larger (car) battery for longer uptime
Can you share more about this? I have an APC Back UPS PRO USV 1500VA (BR1500G-GR) and it would be nice to know if this is possible with that one as well.
That UPS eventually died, and I'm not sure if it was because it was hooked up to a larger battery than it was designed for, but it's still only 12 volts so I don't think the electronics would notice. What they may notice is extended run-time in the event of a power failure.
It was a crude mod. Take the cover off and remove the existing little security alarm battery, use tin snips to cut a hole in the side of the metal UPS cover (this was challenging, it was relatively thick metal, I'd recommend using an angle grinder in an appropriately safe environment far away from the internals of the UPS), and feed the battery cables out through the hole. I probably got some additional cables with appropriately sized terminations to effectively extend the short existing ones (since they were only designed to be used within the device). And then connect it up to a car battery.
Cover any exposed metal on the connectors with heat-shrink tubing or electrical tape. Be very careful with exposed metal anywhere around it, especially anything touching the RED POSITIVE pole of the battery. Get a battery box - I got one for the big-ass AGM battery.
Test it out on a laptop that's had its battery removed or disconnected, and that, just in case, you don't care too much about losing.
Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
Personally, I think it's safer and less hassle to go for a LiFePO4 (LFP) power-station-style device that has UPS capabilities. LFP batteries have 3,000-ish cycle lifetimes, which could be nearly ten years with daily use.
> use tin snips to cut a hole in the side of the metal UPS cover (this was challenging, it was relatively thick metal, I'd recommend using an angle grinder in an appropriately safe environment far away from the internals of the UPS)
Why not just drill a hole? Drill bits large enough to drill a hole for 120A cables exist.
> Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
If you're going this route I'd recommend a marine battery. Car batteries don't handle deep cycles well, and, TBH, UPS chargers aren't designed for failed car batteries (nor marine batteries) and can possibly cause an explosion if the lead-acid battery has a few dead cells.
No, don't do it. I understand the thought process, because they are both 12V batteries with more capacity, but car batteries are made for the high bursts of energy that engine ignition requires, whereas UPS batteries are made for slow drains. Also, these UPSes are made to charge battery cells in a certain way; even if you stack a bank of batteries of the same model in parallel hoping for more capacity, it's a problem for the UPS's charger: they won't charge evenly, and that eventually becomes a problem.
I like to keep my hardware competence sufficiently low so that I’m never cursed with the false confidence to even consider “drilling a hole in a UPS,” nevermind wiring it to a car battery in my closet…
I will mess with all kinds of hardware, especially mini PCs and routers.. I once had a few hundred iPhones in my closet… but I draw the line at anything that uses batteries or electricity in a non-standard way. If the wire can’t carry data, I’m not touching it.
Maybe it’s because when I was a kid, I fancied myself an experimenter, and I had a wire ripped off a lamp, and touched the two ends together…
It's a bit trickier than you think. And can be dangerous.
The discharging circuitry is fine, but the _charger_ might overheat, because a larger battery can draw more current while charging, for longer periods. I discovered that when I tried to attach a "lead-acid compatible" LFP battery to a UPS.
These days, it's just easier to buy a dedicated rack-mountable LFP battery originally meant for solar installations, an inverter/charger controller, and a rectifier. The rectifier output serves as a "solar panel" input for the battery. You get a double-conversion UPS with days-long holdover time for a fraction of the price of a lead-acid UPS.
My commercial UPS already scares me at the fire potential. No way I would take on the risk of some DIY on something that could burn down the place or electrocute me.
There's not much to it, you just take the small 12V sealed lead-acid cell out from the bottom of the UPS, extend the two leads, and connect a larger capacity lead-acid battery of the same voltage.
If you don't recognize the terms "sealed", "lead-acid", "battery", "capacity", or "voltage" then you shouldn't do this.
About the only advantage of it is that it's cheap (free if you find a UPS in the trash with an already dead battery), but those cheap UPSs make really crap quality power, and for some of them the only reason they don't overheat is because their stock battery is so small. It's a bit like how you can cook a whole turkey in the microwave, but you probably don't want to.
I really don't get why people like the Minisforum stuff over the alternatives. I've unfortunately been given one, and honestly I'm really unimpressed between the crap firmware, no real expandability and all of the other compromises that come with buying AliExpress hardware. For the same money you can either pick up a used entry-level Dell/HP/Lenovo server (and if they're E3/W/other entry-level Xeon, they're usually not terrible on power) or get a good ATX chassis and power it with some off-lease Supermicro hardware. Then you don't need to compromise on things like OOB management, hot-swap bays, a real SAS card, real 10G NICs, ECC RAM, etc. Maybe people are just afraid of doing a little bit of hardware assembly? I've seen and have systems that use the above gear that have been going for well over a decade now, with basically no hiccups, and even the old Sandy Bridge-era E3 stuff probably punches above even an RPi5 or N100 and doesn't draw more than 30-40W when you don't have spinning disks in there. I'm sure if you avoid AMD and go find a newer T-variant Intel chip, you can both have your cake and eat it.
The N100 is faster and more efficient than any Ivy Bridge E3. At idle the Xeon draws roughly 20W more, which works out to about $30 USD/year at national average electricity prices. That gap widens as the load increases.
I can totally see why someone who doesn't need expandability would choose the cheap mini PC.
When I first got into homelabbing as a hobby, I built a massively overpowered server because I was highly ambitious. It mostly just drew power for projects that didn't require all the horsepower.
A decade later, I like NUCs and Pis and the like because they're tiny, low-power, and easy to hide. Then again, I don't have nearly as much time and drive for offhand projects as I get older, so who knows what a younger me would have decided with the contemporary landscape of hardware available to us today.
Even my old GTX 970 can throttle down to something like 10W while still being able to drive a display and, IIRC, also hardware-decode 1080p60 H.264, let alone when you put it in a mode that at all matches S3/suspend-to-RAM via PCIe sleep states.
I'm pretty sure laptops with extra dGPUs normalized aggressive sleep (of the power-gating kind) for their GPUs to keep their impact on battery life negligible (beyond their weight otherwise being used for more battery) until you launch an application that you've set to run on the dGPU.
I just purchased a Minisforum mobo BD795i SE with a Ryzen 9 7945HX (16 core, 32 thread). Can’t beat the price to performance. Building a NAS / VM server with 5x 14TB Seagate Exos drives, 2TB NVME, 500GB boot SSD, and 96GB of DDR5 memory. I was able to buy all components including a 3u hotswap 5x drive caddy for less than $1,200 all in. Can’t really beat that.
For appliance-like quickly replaceable little servers like my firewall or other one off roles, they are ok for me, but to run my TrueNAS system (ZFS) I gotta run something with a Supermicro board and ECC.
Mission critical workloads that need 24/7 uptime (homelab general purpose always-on server)!
I used to really like the minis, but I basically had to e-waste two of them because the Ethernet went bad (lightning strike, I think), there was really no way to replace it, and the OS would crash from the resulting hardware issues.
FWIW, if this keeps happening to you, you can get Ethernet surge protectors. Or use a couple of cheap media converters to go from copper to fiber and back to copper.
That's just an artifact of Intel disabling ECC on consumer processors.
There's no reason for ECC to have significantly higher power consumption. It's just an additional memory chip per stick and a tiny bit of additional logic on the CPU side to calculate the ECC.
If power consumption is the target, ECC is not a problem. I know firsthand that even old Xeon D servers can hit 25W full-system idle. On the AMD side, the 4850G has 8 cores and can hit sub-25W full-system idle as well.
If anyone is looking to get started with a homelab at a good price, I can highly recommend checking eBay for a Dell Wyse 5070. They flooded the market at around $50 and are likely powerful enough for many needs. They have an M.2 slot that supports SATA. The 'extended' version also has space for a small PCIe card and has a parallel port and 2 serial ports for a blast from the past.
I built mine around an N150 board off of aliexpress. 6 SATA slots, 4x2.5G ethernet, 2x m.2 slots. Find a cheap second-hand case, a bit of RAM and you're ready to go. It's got hardware transcoding, handles 4K without breaking a sweat. And it consumes 6W!
I'm more inclined to go with an N305/N355 myself for the extra compute (more images/containers), but they're a pretty decent option. I set up a "forbidden router" at a friend's using one. It's been working great for his home use... Proxmox, OPNsense for routing, WireGuard, Pi-hole, a Docker VM running his AP control software, and a TrueNAS Scale VM serving a USB hard drive for home backups.
At home, I'm using a 5900H-based mini PC I bought a few years ago and a Synology NAS.
For a bit more money, the OptiPlex Micro / Lenovo Tiny / HP Mini series with at least an 8th-gen i5 are a good option too. They can be found on eBay for about 70-120 USD and are much more powerful than the Wyse 5070 while still quite power efficient (about 10W idle, as opposed to the Wyse's 5W). Usually they come with one NVMe slot and one 2.5" SATA slot; some premium models even have PCIe.
All good options; just noting that, based on some searches I made, they all seem to lack serial ports compared to the Wyse, if that is something you care about (I personally do). There could be variants out there with serial ports, though, and I would be happy to hear about them, and even happier if there are fanless variants/alternatives for those of us with very limited space at home and a need to avoid noise.
I don't know about fanless, especially with an i5 as opposed to n-something.
But not all those minis are the same. The G4 (Intel 8th gen) and G5 (Intel 9th gen) HPs are horrendous. The fan makes an extremely aggravating noise, and I haven't found a way to fix it. Bonus points for both the fan and heatsink having custom mounts, so even if you wanted an ugly but quiet machine by slapping on a standard cooler, you couldn't.
The G6 versions (Intel 10th gen) seem to have fixed this, and they're mostly inaudible on a desk, unless you're compiling something for half an hour.
I have a somewhat bigger machine that hosts my homelab, an HP 800 G2 SFF. It takes "normal" components, so it can be modified. The only custom thing is the PSU, but the standard one is good enough for my needs. Bonus points for not requiring an external power adaptor.
It has an i5-6500, 32 GB RAM (16 + 2x8 DIMMs), 2 SATA SSDs and a 2x10Gb ConnectX-3. It runs 24/7 hosting OPNsense and Home Assistant on top of KVM (Arch Linux Hardened – I didn't do anything specific to lower the power draw). Sometimes other stuff, but not right now.
I haven't measured it with this specific NIC, but before, it had a 4x1Gb i350. With all ports up and all VMs running but not doing much, some power meter I got off Amazon said it pulled a little over 14W. The peak was around 40W when booting up.
Electricity costs 0.22 €/kWh here. The machine itself cost me €0 (they were going to throw it out at work), €35 for the NIC and maybe €50 for the RAM. It would take multiple years to break even by buying one of these small machines. My plan is to wait until they start having 10 Gb NICs and this machine can't keep up anymore.
A quick search online tells me ~5W for the Dell Wyse 5070, which does not sound unrealistic, as I have similar boxes that draw ~10W. So, 32 to 62 kWh per year, which comes to ~USD 6.5 to 13 per year assuming 20 cents per kWh, which another online search told me is reasonably realistic for the US.
Another cheap option is the Fujitsu Futro. They're meant to be thin clients for operating larger servers (I guess?). Anyway, they come with 4-8GB RAM and an SSD (most people upgrade them), and some models even have a PCIe slot to use with a 2.5 or 10 Gbit Ethernet card, for example. Around $50 on eBay.
Local DNS, static site hosting, a local apt cache, various other network services (a UniFi controller if you've got those APs, for example), a remote/headless dev machine (maybe not for kernel or bigcorp Java development), or whatever else you want. Mail if you want. Anything :)
Those little thin clients aren't gonna be fast doing "big" things, but serving up a few DNS packets or whatever to your local network is easy work and pretty useful.
Even these low-power CPUs are surprisingly capable. As an example of something fancier, one could slap on some external storage, install Jellyfin, and run their own local streaming service off such a machine. The CPU is modern enough for efficient hardware transcoding of a stream.
Looking at Raspberry Pi prices inside the EU, I can get 8-core laptops for a cheaper price, with display, dGPU, et al.
No idea what happened, but Raspberry Pis have been super expensive for the last couple of years, which is why I decided to just go with used Intel NUCs instead. They cost around 80-150 EUR and they use more electricity, but they are quite good bang for the buck, and some variants also have 3x HDMI, Gbit/s Ethernet, or M.2 slots you can use to put a SATA RAID in them.
Same. I switched over during the pandemic when full N95/N100 systems were cheaper than just the RPi board by itself. More compute/RAM, faster storage, included case and power supply, fewer headaches.
No. The main pain point with RPis is that they're SD-card based, which is slow and prone to failure. Configuring an SSD to be used as the main storage has also been a pain in the past (not sure if that's changed recently).
With an N100, you get a better, more upgradable system for around the same price and the same power usage. On top of that, you also get an x64 system that isn't limited by ARM quirks. I made the switch to N100s over a year ago and have had no issues with them so far.
Another tip: second-hand gaming PCs! They can be incredibly cheap and powerful due to upgrade cycles; just make sure to put RAID 1 on it, as second-hand gamer gear might be less reliable.
This is cool - I have a similar home lab on a Mac Mini [1].
The encryption question is interesting. I don't have disk encryption turned on, because I want the computer to recover from power failure. If power turns off then on, the server would be offline until I decrypt it.
How does your "Wake on LAN" work with the encryption?
Author here: indeed, I bought a JetKVM. A colleague had one and recommended it warmly! I'm very happy with it so far, but my usage has been rather basic.
I've heard that it might be difficult to get one in the US though.
Building a homelab is an awesome way to learn a lot of things.
I also used to over-engineer my homelab, but I recently took a more simplistic approach (https://www.cyprien.io/posts/homelab/), even though it’s probably still over-engineered for most people.
I realized that I already do too much of this in my day job, so I don’t want to do it at home anymore.
Why install Proxmox on top of Debian? Proxmox distributes an ISO that basically does the same thing as you're doing with preseeding.
Although recently I did have to install Proxmox as an apt package on top of Debian because the Proxmox ISO wouldn't install properly. That actually happened twice. I think I'll just install Debian from now on…
As an example, I use cloudflare tunnel to point to an nginx that reverse proxies all the services, but I could just as well point DNS to that nginx and it would still work. I had to rebuild the entire thing on my home server when I found that the cheap VPS I was using was super over-provisioned ($2/mo for 2 Ryzen 7950 cores? Of course it was) and I had this thing at home anyway, and this served me well for that use-case.
When I rebuilt it, I was able to get it running pretty quickly and each piece could be incrementally done: i.e. I could run without cloudflare tunnel and then add it to the mix, I could run without R2 and then switch file storage to R2 because I used FUSE s3fs to mount R2, so on and so forth.
Your VPS provider likely uses servers with ECC RAM; this home server doesn't. For most people it doesn't seem to matter, but for me it does - a home server where I store my data needs to have ECC RAM.
ECC RAM protects against bit flips (a bit changing to the wrong state). These can be caused by electric or magnetic interference. A pixel of a picture suddenly having the wrong color because of a bit flip is not that bad but some day an important file might end up corrupted because of one. I want to sleep well at night not having to worry about silent data corruption so ECC RAM it is. See here for more information: https://en.wikipedia.org/wiki/ECC_memory
True! Unfortunately, an enterprise machine is more likely to have considerable power draw and quite possibly be much louder. I have a 2013 Apple Mac Pro (trashcan) that uses ECC. They're also cheap, small, and quiet.
This was exactly a use case I had in mind when building https://canine.sh -- also uses k3s as a provider, and provides a Heroku-like devex.
How to actually reliably expose a homelab to the broader internet is a little tricky. Cloudflare Tunnels mostly do the trick but can only expose one port at a time, so the setup is somewhat annoying.
I've got basically raw internet coming in to my OPNSense device, although I had to request certain ports to be removed from the ISP's by-default-blocked policy, since I host a mail server - but the ISP is fine with this, they have a form for it, super easy.
Some family members are behind CGNAT, and I'm not sure if their ISP has the option to move out from behind that, but since they don't self-host it's probably slightly more secure from outside probes. We're still able to privately share communications via my VPN hub to which they connect, which allows me to remotely troubleshoot minor issues.
I haven't looked into cloudflare tunnels, but haven't felt the need.
The encryption is interesting, but I wouldn't call this over-engineered at all; in fact, it's rather basic compared to a lot of homelabs I see people building, particularly where people are doing K8s or similar across multiple machines.
Also Proxmox was called out as the only choice when that is very much not the case. It is a good choice for sure, but there are others.
I love infrastructure. I run my own services at home too, and have for many years. But to be honest, the older I get the less fun it is to deal with issues that come up from time to time.
At some point you'll need to upgrade hardware and software, and you get to do the exercise all over again. There will always be lessons learned, and it gets better each time. It's still work.
This is such a great post. I have a small collection of posts for inspiration in creating my homelab, and this is getting added to it. Currently I have a Pi 4 with Pi-hole and a Beelink. Going to add one or two more machines.
After some research, it seems much easier to just back up the Proxmox config (and VM disk images, if they're needed) than to define or deploy Proxmox VMs with OpenTofu or ansible.
I did the exact opposite. And by that I mean physically moved my homelab into their colo earlier this year. Runs like a charm, costs about 500€ per month total.
Sounds like a lot, but I was paying almost the same before - 220€ for power at home, 110€ for a dedicated Hetzner server, 95€ for a secondary internet connection (so as not to interfere with the main uplink used for home office by my partner and me).
Not having to deal with the extra heat, noise and used up space at home anymore has been worth it as well.
My storage needs were increasing by the day. Electricity is now a small monthly cost. I have more cores and RAM than ever, and can easily expand. The main machine now runs with 1TB RAM and 15TB of SSD, and the other has more than 384GB RAM. I currently use 3TB of SSD storage and get way more performance than Hetzner's VMs with Ceph SSD disks. I do need redundancy, but it's not something Hetzner was giving me anyway, and if my anecdote is not a mix-up, I actually got database corruption on Hetzner that never happened on my own local setup.
I'd have colo'ed or used a dedicated server, as that's definitely better than their VMs, but they don't have those in their US datacenters.
I am pretty happy with my current setup. I have significantly less downtime (a few minutes a month) than when I was on Hetzner - but this is mostly due to my need for more RAM at times.
I also used this as an excuse to get a 56G Mellanox fiber switch and PoE cameras etc. in full homelab manner, so it's been fun, on top of being cheaper. Noise is not a concern; I got a soundproof server rack that's pretty nice. It takes up space, but I have kids, so my garage is nearly full at times anyway :)
I've got a single server and a /28 IP block on OVH for public facing stuff. Mostly because it's cheaper than the bump from my "home" internet to "business" to be able to use common server ports (blocked on the home service).
I ran a home lab for a number of years. This was a fairly extensive setup - 4 rack-mount servers, a UPS, an Ethernet switch, etc., with LTO backups. It did streaming, email and file storage for the whole family as well as my own experiments.
One morning I woke up to a dead node. The DMZ service node. I found this out because my wife had no internet. It was running the NAT and email too. Some swapping of power supplies later and I found the whole thing was a complete brick. Board gone. It's 07:45 and my wife can't check her schedule and I'm still trying to get 3 kids out of the door.
At that point I realised I'd fucked up by running a home lab. I didn't have the time or redundancy to have anyone rely on me.
I dug the ISP's provided WiFi router out, plugged it in, configured it quickly and got her laptop and phone working on it. Her email was down, but she could check her calendar etc. (on iCloud). By the end of the day I'd moved all the family email off to Fastmail and fixed everything to talk to the ISP router properly. I spent the next year cleaning up the shit that was on those servers and found out that between us we only had about 300 gig of data worth keeping, which was distributed out to individual MacBooks, and everyone is responsible for backing their own stuff up (Time Machine makes this easy). Eventually email was moved to iCloud as well when custom domains came along.
I burned 7TB of crap, sold all the kit and never ran a home lab again. Then I realised I didn't have to pay for the energy, the hardware or expend the time running it. There are no total outages and no problems if there's a power failure. The backups are simple, cheap and reliable. I don't even have a NAS now - I just bought everyone some Samsung T7 shield disks.
I have a huge weight off my shoulders and more free time and money. I didn't learn anything I wouldn't have learned at work anyway.
I can relate to this. I still run my own x86 box as a router to have the AP controller, but I'm strongly considering dropping this.
I need to update it and patch it, hoping nothing goes wrong in the process. If something breaks I'm the only one that can repair it, and I really don't want to hear my wife screaming at me at 7am when I wake up.
Eventually I came to your same conclusion, but I still run a hybrid setup that allows me to keep the router (for now), and a NAS for backup (3-2-1) and some local services. I run a dedicated server from Hetzner for "always on" services, so that the hardware, power redundancy and operational toil are offloaded. I gave up long ago on email: any hosting service will be way better than me doing it - I know I can do it, but is it worth my sanity? Nope.
While i get the whole homelab thing is exiting and a great learning experience, it's simply not worth the time and effort for the majority of people.
You will end up paying much more for your services, along with spending a ton of time maintaining it (and if you don't, you will probably find yourself on the end of a 0-day hack sometime).
In Northern/Western Europe, where power costs around €0.3/kWh on average, just the power consumption of a simple 4 bay NAS will cost you almost as much as buying Google Drive / OneDrive / iCloud / Dropbox / Jottacloud / Whatever.
A simple Synology 4 bay NAS like a DS923+ with 4 x 4TB Seagate Ironwolf drives will use between 150 kWh and 300 kWh per year (100% idle vs 100% active, so somewhere in between), which will cost you between €45 and €90 per year, and that's power alone. Factoring in the cost of the hardware will probably double that (over a 5 year period).
It's cheaper (and easier) to use public cloud, and then use something like Cryptomator (https://cryptomator.org/) to encrypt data before uploading it. That way you get the best of both worlds, privacy without any of the sysadm tasks.
Edit: I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you. Eventually those people won't be there anymore, and the memories you make with those people will matter far more to you in 20 years, than the €20/month you paid for cloud services.
Have you seen the prices for Google Drive et al? The NAS setup you describe (which I wouldn't consider worth the money for that little space) is what, 12GB with 1 parity drive?
Google One for 10TB is 274,99€/mo (at least in my country) so you'd make the entire nas price and subscription cost within a few months, let alone years.
There just aren't compelling public cloud for large sizes (My NAS is 30TB capacity and I'm using 18 right now) and even if you go the more complex loops with like S3 and whatnot you still get billed more than it's worth. Public cloud is meant for public files, there's a lot of costs you're paying for stuff you don't need like being fast to access from everywhere.
The maintenance time is a bit overestimated if you keep it simple.
On my homelab, I update everything every quarter and it takes about 1 hour, so 4 hours a year is pretty reasonable. Docker helps a lot with this.
And I’ve almost never run into trouble in years, so I have very few unexpected maintenance tasks.
EDIT: I am referring to a homelab that is only accessible for private purposes through a VPN.
As a bare minimum, you should update your server and docker images daily, or at least whenever there's an update (which you won't know unless you check).
If you only access your homelab over VPN or similar, then by all means, update whenever you feel like it, but if you expose your services to the internet, you want to be damned sure there are no vulnerabilities in them.
The internet of today is not like it was 20 years ago. Today you're constantly being hammerede by bots that scan every single IPv4 address for open ports, and when they find something they record it in a database, along with information on what's running on that port (provided that information is available).
When (not if) a vulnerability for a given service is discovered, an attacker doesn't need to "hunt & peck" for vulnerable hosts, they already have that information in a database, and all they need to do is start shooting at their list of hosts.
You can use something like shodan.io to see what a would be attacker might see (can check your own IP with "ip:xxx.xxx.xxx.xx".
Try entering something like Synology, Proxmox, Truenas, Unraid, Jellyfin, Plex, Emby, or any of the other popular home services.
Sorry, I should have mentioned that my services are only accessible through a VPN. Otherwise, I completely agree with you.
> As a bare minimum, you should update your server and docker images daily, or at least whenever there's an update (which you won't know unless you check).
I got this set up automatically with Renovate: https://github.com/shepherdjerred/homelab/blob/main/src/cdk8...
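For anyone curious, a minimal renovate.json along these lines might look like the sketch below. This is a generic illustration, not the linked repo's actual config, and the automerge policy is an assumption:

    {
      "$schema": "https://docs.renovatebot.com/renovate-schema.json",
      "extends": ["config:recommended"],
      "packageRules": [
        {
          "matchDatasources": ["docker"],
          "matchUpdateTypes": ["minor", "patch", "digest"],
          "automerge": true
        }
      ]
    }

Renovate then opens (and, with a rule like this, auto-merges) PRs whenever an image tag or digest it tracks gets an update, which addresses the "you won't know unless you check" problem.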
It's pretty easy to soft-expose yourself now with things like Cloudflare Tunnels, without a lot of the security risks. You can easily put all access behind a secret/API key or an OAuth login.
I definitely need to get my security hygiene up to snuff, but let me ask you: since my reverse proxy (Caddy in my case) refuses connections without a matching domain, would the scans reveal anything about my host if they don't know the URL of my Jellyfin instance?
> which you won't know unless you check
RSS feeds FTW
Who maintains the VPN?
If it were me doing this, either Zerotier or Tailscale. They aren't strictly VPNs in the traditional sense, but they largely achieve the same ends, and Zerotier's been much more flexible and performant than anything else I've ever tried.
But on a homelab you can host any service you want and start/stop it whenever you need to. Sure, cloud storage might cost less in the short term, but if you need more storage or more services, a self-hosted option is far cheaper.
VPSes are very expensive for what you get. If you have the capital, doing it yourself saves you money very quickly. It's not rare to pay $50/month for a semi-decent VPS, but for $2000 you would get an absolute beast that would last 10 years at the very least.
With Docker, maintenance is basically zero and unused services are stopped or restarted with 1 command.
How many services do you need? And how much CPU power do those services need?
I've also self-hosted for decades, but it turns out I don't really need that much, at least not publicly.
I basically just need mail, calendar, file storage and DNS ad blocking. I can get mail/calendar/file storage with pretty much any cloud provider (and no, there is no privacy when it comes to mail, there is always another participant in the conversation), and for €18/year I can get something like NextDNS, Control D, or similar.
For reference, a Raspberry Pi 4 or 5 will use around 50 kWh per year, which (again, in Europe) will translate to €15/year. For just €3 more per year I get redundancy and support.
I still run a bunch of stuff at home, but nothing is open to the public internet. Everything runs behind a WireGuard VPN, and I have zero redundant storage. My home storage is used for backing up cloud storage, as well as storing media and client backups. And yes, I also have another cloud that I back up to.
My total cloud bill is around €20-€25/month, with 8TB of storage, ad-blocking DNS, mail/calendar/office apps and even a small VPS.
I did not do the price calculations (in France, and I prefer not to know :) but I host many things except for mail and calendar (mail is tricky to host). Of these 29 services, I use maybe 4 daily and 15 monthly. They are well protected, easy to maintain, and serve family and friends.
Not to mention that I love them.
The others' points are valid. Google Drive is rather expensive, Hetzner is cheaper and works well enough.
However, it also depends on how you use that data. In my case, I'm a Sunday photographer, so I tend to wrangle multiple GB of data at a time. I usually edit my photos locally, but I will sometimes want to revisit older stuff. I can download it, but it's a PITA and s_l_o_w. Google Drive File Stream is terrible for this; you never know if the files are uploaded or not. OneDrive isn't any better. I haven't tried Dropbox.
Hetzner has a storage box product which exposes SMB but doesn't seem to enforce encryption or IP filtering, so I'm not very comfortable with that.
Also, my internet connection pretends I have 5 Gb down, 0.5 up. The down part is usually as expected (my machine only has a 1 Gb NIC), but upload is sometimes very slow. Running a local NAS is much faster. It's ZFS, so backups are trivial to send to encrypted offsite storage.
It also doesn't need to run 24/7, which helps with power usage (0.22 €/kWh here).
> I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Indeed. Waiting around for files to transfer gets old quick. I have better things to do with my time. My NAS needs a whopping five minutes of my time every now and again when a new kernel comes out.
Once you know how to build and maintain infrastructure you realize that while it's nice to know how to do it, it's not cost-effective.
The thing is, it's worth it to learn. Do you know the basics of how to set up a completely redundant environment? There's no conceptual difference between setting one up at home by using consumer equipment and setting it up in a data center. You can get pretty capable equipment (Mikrotik) for less. The enterprise stuff has more configuration options, but it's doubtful that you'll use most of them.
Setting up backup WANs, redundant routing, DNS, power, etc. is fun. Setting up redundant load balancers, backend services, databases, etc. is also fun. It's not hard to do, it's just hard to get it all right. There are probably a zillion configuration parameters you can mess with, and only a few sets actually work. Unfortunately, the sets that work in your home won't be the ones that will work in production, but you could possibly run load tests etc. to simulate a real environment (though simulating multiple clients from multiple endpoints is harder than you think).
And of course, getting production equipment is hard. Nobody has 2 F5s lying around. And you really need at least 4 F5s, because you have redundant locations. That's a lot of cashola. And in most environments you wouldn't want some random person messing around with the production (or test) F5s. It's the same with NAS, VM servers, docker registries, etc.
I suspect getting the whole end-to-end setup isn't something people experience anymore, because small companies have (or at least should have) moved to the cloud by now.
Not everything that seems "interesting" is worth it to learn from an economic perspective. Could it be worthwhile for someone studying for the A+ Computer Technician test? Maybe. Could it be worth it for someone looking to impress their boss Harry? Possibly, if Harry also controls your pay and has a penchant for overpaying for locally run infrastructure and a distrust of the cloud. Possibly. These kinds of investments are best evaluated at the individual level --- not everyone will benefit. Some may find themselves no more competitive in the job market than your average IT clown, but as always, results will vary.
Learning things because they are interesting is reason enough in itself for many people, regardless of any economic benefit.
Google Premium storage is $100/yr (paid annually) for 2TB of storage.
Even at the high-end estimate, the homelab gives you several times the storage for the same cost.
Do you need more space than 2TB? (excluding things you've downloaded from the internet)
Very few people I know have use for that much storage. Yes, you can download the entire Netflix catalog, and that will of course require more storage, and no, you probably shouldn't put it in the cloud (or even back it up, or use redundant storage for it).
Setting up your own homelab to be your own Netflix, but using pirated content, is not really a use case I would consider. I'm aware people are doing it, and I still think it's stupid. They're saving money by "stealing" (or at least breaking laws), which is hardly a lifehack.
I know many people that would fill that space with home videos from phones and digital cameras. Millennials with kids especially.
You can fit A LOT of photos and videos in 2TB.
My wife is a professional photographer, and while we do archive most of her RAW files somewhere else, pretty much everything HEIC, JPEG or any other compressed format lives in our main cloud.
We have 2.2TB in total for “direct storage”, and we’re currently using around 1.5TB, and that’s including myself and our kids.
My personal photo library has just short of 90,000 photos, and about 5,000 videos. My wife’s library is roughly twice that. I have no idea how many photos the kids have, but they each take up around 200GB for photos.
And then we have backups, which actually take up about 1TB per person, mostly because that's the space I've allocated for each, so history just grows until it's filled. Photos ideally won't change much. We back up originals along with XMP metadata for edits, so the photos stay the same, and changes are described in easily compressed text files. Backups of course also have deduplication enabled.
My mother, now in her 70s, has about 4TB worth of photos and videos, and we haven't even started digitizing stuff.
My friend, in her mid-20s, uses nearly 3TB of Apple cloud space with photos and videos, mostly of her kids and dog.
I don't even film much, but I'm using about a terabyte.
You are moving the goalposts and supporting your generic point only under very narrow assumed conditions.
There’s always a “right tool for the job”. Sometimes it’s the cloud. Sometimes it isn’t. The article is for people who found the cloud isn’t a good fit and need something at home.
A lot of people have large collections of music or movies. Or want to keep full control over some data no matter the cost. Or need it to work without internet. There are many solid reasons to avoid the cloud and use your own solution.
You are arguing that since your original assertion can't be wrong, people's stated needs must be wrong: because you have different needs, others must be doing it wrong. And this undermines everything else you say.
Five comments up you're talking about 4x4TB NAS setups. Which is it?
> do you need more space than 2TB ?
Yes.
> (excluding things you've downloaded from the internet)
Why on earth would I do that? My storage includes things I downloaded from the internet that are not there anymore/hard to find/now paywalled. If you were thinking the only thing to download from the internet is pirated media - I haven't included that in my >2TB assessment.
Author here, I completely agree. In fact, I even wrote about it: https://ergaster.org/posts/2023/08/09-i-dont-want-to-host-se...
My homelab is my hobby. I maintain it for my pleasure and to learn new skills. We have an infra nerds club with a few colleagues and we're having a lot of fun comparing our approaches!
As a Northern European (Finland) I can tell you that the electricity cost here last year was closer to 0,1 € per kWh, including the transfer fee and taxes. Additionally, more than 40 % of houses here have electric heating. The heating season starts in early autumn and ends in late spring, lasting 8 to 9 months depending on the year. As the electricity used by the device is turned into heat, running it during the heating season effectively costs nothing extra.
Yeah, northern Scandinavia has plenty of renewable energy.
As for electric heating, that is true in 1:1 (resistive) heating scenarios, but I assume you guys are also using heat pumps these days, and while you still get the heat "for free", it will be nowhere near as efficient as the same electricity going through your heat pump.
Yes, it's probably peanuts in the grand scheme of things; I know our air-to-water heat pump in Denmark uses around 4500-5500 kWh per year, so adding another 100 kWh probably won't mean much.
A NAS has like 10,000x more storage than Google Drive and is also way faster locally.
The premium plan from Google has 2 TB and costs about the same annually as the electricity for the NAS that the GP comment suggested for comparison (at 100% usage). So at the same ONGOING cost (not even counting initial investment), the NAS has 8 times more storage. 16 times if you assume it will be mostly idle. Except if you want high availability with RAID, then you're back to 8 times. And we haven't yet thought about backups.
All this assuming that you even need that much storage, which most people definitely do not.
Google Cloud has deleted users' data by mistake in the past.
I'm willing to bet that far more data has been lost by people hosting their own data than Google has ever lost.
In any case, you should always make backups regardless of where your data is stored. At home, your biggest threat is loss of data, probably through hardware malfunction, house fires or similar.
In the cloud your biggest threat is not loss of data but loss of access to data. Different scenarios but identical outcomes.
Backup solves both scenarios, RAID doesn't solve any of them, but sadly, many people think "oh but I've got RAID6 so surely I cannot lose data".
Having experienced batches of faulty HDDs in a home NAS, I can tell you that you can definitely lose data even with RAID6 or ZFS RAIDZ2.
Of course, syncing a NAS between yourself and a friend or family member's home may be the better solution over cloud options.
How much space do you realistically need high-availability, redundant storage for?
For my personal use case, that involves photos and documents, all things I cannot easily recreate (photos less so). Those are what matters to me, and storing them in the cloud means I not only get redundancy in a single data center, but also geographical redundancy, as many cloud providers will use erasure coding to make your data available across multiple data centers.
Everything else will be just fine on a single drive, even a USB drive, as everything that originated on the internet can most likely be found there again. This is especially true for media (purchased, or naval acquisition). Media is probably the most replicated data on the planet, possibly only behind the Bible and the IKEA catalog.
So, back to the important data: I can easily fit an entire family of 4 into a single 2TB data plan. That costs me somewhere around €85 - €100 per year, for 4 people, and it works no matter what I do. I no longer need to drag a laptop with me on vacation, and I can basically just say "fuck it" and go on vacation for 2 weeks.
> everything that originated on the internet can most likely be found there again
I would that this were true. I guess it depends on what you mean by "the internet", but there's a reason the Internet Archive exists. Sure, you don't need to back up your recent Firefox installer or your Debian ISO but lots of important and valuable data can't be found on the internet anymore. There are very valid reasons that groups like Archiveteam [1] do what they do, not to mention recent headlines like individuals losing access to their entire cloud storage [2].
[1] https://wiki.archiveteam.org/index.php/Main_Page [2] https://www.theregister.com/2025/08/06/aws_wipes_ten_years/
I just need you to make comparisons that are fair.
I thought they were.
If you need to commute to work daily and you're concerned about the cost, you don't really care that you're comparing a city car vs a sports car vs the bus (even though one goes 80 km/h and another can do 230 km/h) if all you're interested in is the price.
Obviously as your storage needs increase, so will cloud costs, but unless you're a professional photographer, I'm guessing 2TB will be more than enough for most people.
Again, I'm not talking about people trying to run their own media server on pirated content and saving money that way. In my book that's comparable to saving money by robbing a bank. You're not saving anything, you're breaking the law, and 9 out of 10 times it's cheaper to steal someone else's bike than it is to take a taxi home.
I'm talking about actual storage for data you actually own, and possibly even data you have created yourself. Anything that came from the internet can be found on the internet again, purchased or naval acquisition.
Sorry, you make some good arguments but then mix them up with clueless assertions.
"2 TB ought to be good for everyone" is hilarious. There are so many people I know who would fill a 512 GB phone in 1-2 years with photos and videos.
Maybe you do not have a use case or situation where larger storage is needed. But it is strange to assume everyone is in the same bucket.
It’s great to learn on, and if you happen to have a place with free electricity then even more fun :)
It’s also an excuse for me to stay in most summer days.
The price and effort is practically irrelevant. My homelab is mine, local, and answers only to myself and a wall outlet. Also, where I live, the internet is simply not dependable enough to consider otherwise.
OP has admittedly over-engineered their setup. Depending on your goals (cost, speed, space, autonomy), there are less-rigorous solutions for the layperson.
I, for one, don't want to have Google, etc. as a dependency[1], so I will pay some energy cost to do that.
1: see: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
> Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you.
Agreed, but it doesn't have to take time from your family. I'm on a small team that self-hosts internal services to lower costs/risks. It takes very little time to maintain, and maintenance windows happen on our terms. Our uptime this year is better than "Github Actions", the latency is incredible, and we've had no known security issues.
There are two keys to doing this successfully: (1) don't deploy anything you don't understand (so it won't take you long to fix), and (2) even then, aggressively avoid complexity (so it doesn't break in the first place.)
For example, despite significant network expertise, we stuck to a basic COTS router and a simple IPv4 subnet for our servers. And the services we run are typically self-contained Go binaries that we can deploy with bash onto bare metal. No Docker, KVM, Ansible, or k8s.
This DIY setup saves us considerably more than it costs. Not for everyone, but with proper scoping, many readers of hacker news could pull this off without losing time with their loved ones.
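For illustration, a deploy in that style can be as small as the sketch below. The service and host names are placeholders, not the poster's actual setup, and it assumes a systemd unit for the binary already exists on the box:

    #!/usr/bin/env bash
    # Minimal "Go binary + bash + systemd" deploy sketch; "myservice" and "server1" are hypothetical.
    set -euo pipefail

    CGO_ENABLED=0 go build -o myservice ./cmd/myservice   # build a self-contained static binary
    scp myservice server1:/usr/local/bin/myservice.new    # stage it next to the live binary
    ssh server1 '
      mv /usr/local/bin/myservice.new /usr/local/bin/myservice \
      && systemctl restart myservice \
      && systemctl is-active --quiet myservice
    '

The appeal is that every moving part is something you can read in a minute, which is exactly the "don't deploy anything you don't understand" rule above.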
It's a hobby for devs who work at such a high level of abstraction that they need to tinker with servers to remind themselves that servers still exist.
Every homelab I have come across is a hobby project and time sink that is more like a backyard garden or classic car.
>Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you
Who are you to tell people how to spend their time? Let people have hobbies ffs
Sure, run a homelab as a hobby. Everybody has hobbies.
Once your user count goes beyond 1, you suddenly have an SLA, as people are dependent on your services. Like it or not, you are now the primary support staff of your local cloud business.
The more users you get, the more time you will need to spend to fix problems.
At which point does it go from a hobby to a 2nd job ?
You're still arguing from the point of view of someone who doesn't want to do it or isn't interested in doing it. Your GP said you 'get' homelabs but it's increasingly clear you do not - and that's ok. People run homelabs because they enjoy learning and tinkering. If they don't enjoy it, or they can't risk having the odd problem, they have other options they can explore. It's not really any more complicated than that. Believe it or not, people are capable of evaluating the tradeoffs and making a sensible decision about what to host themselves.
A server will never love you back.
Neither will the majority of hobbies for self-enrichment.
It isn’t unreasonable to want some alone time.
Neither will a lizard. People still have them.
Well I'm ugly so… at least it doesn't actively hate me either.
Yeah, and it also won't want to move out because you pushed the start button too many times, or stay sullen over the weekend because the dinner plan on Friday got canceled.
Stop with this infantilising crap. People can have both rewarding relationships and pastimes. Just because someone likes configuring software does not suggest they neglect their relationships or have delusions that need correction about the value of things in their life.
Unless you run your AI waifu on it.
Its purpose in life is to serve, not to love.
Encrypted data on the servers is only useful if your server is just dumb storage. I want the server to actually do something, e.g. serving media, running Home Assistant, etc.
> Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you.
To some, spending time hunched over servers is doing things they love.
I mean, each and every thing you said about maintaining a home lab you can also say about maintaining cloud infrastructure.
There was a time when having hobbies was normal. It seems nowadays some people mistake hobbies for work after hours? Where is that hacker mindset?
lol.
don't spend your time cooking food, pay for others to prepare it for you.
don't spend your time maintaining a house, rent and let someone else do the maintenance.
just lease a car and get a new one automatically every 3 years.
honestly, everyone has their own setpoint for things. and there are degrees of solutions for every point you make.
I think most people would benefit from being just a little bit self-sovereign.
Personally, "majority of people" could use one low power fanless server with 1tb for the few things most people need continuously online.
And separately a server you turn on occasionally with lots of storage, like maybe
https://www.amazon.com/dp/B09TV1XPDD
I'm reminded of jwz's backups info www.jwz.org/doc/backups.html
"RAID is a goddamn waste of your time and money"
Seems like a lot of adults yearn for having a mommy and daddy take care of everything for them
Alternately, most computer vendors actively interfere with your independence and force you into the cloud in various ways with your computers and phones.
> But my server could be shut down because of a power outage or another reason. I might be at work or even on holidays when it happens, and even wireguard can’t solve this.
A 'power outage' incident doesn't seem to have been mitigated. My homelab has had evolving mitigations: I cut a hole in the side of a small UPS so I could connect it to a larger (car) battery for longer uptime, which got replaced by a dedicated inverter/charger/transfer-switch attached to a big-ass AGM caravan battery (which on a couple of occasions powered through two-to-three hour power outages), and has now been replaced with these recent LiFePo4 battery power station thingies.
Of course, it's only a homelab, there's nothing critically important that I'm hosting, but that's not the dang point, I want to beat most of "the things", and I don't like having to check that everything has rebooted properly after a minor power fluctuation (I have a few things that mount remote file stores and these mounts usually fail upon boot due to the speed at which certain devices boot up - and I've decided not to solve that yet).
For anyone thinking of doing this, please please don't. A car battery is probably never a sealed deep cycle battery, and the UPS's charging circuitry is not intended to charge a battery of this size (this is assuming you're using a lead based battery, and not something even more crazy and dangerous like Li-Po or LiFePO4). God forbid you have a cell fail on a car battery and that charger starts cooking the battery. I've had actual car lead acid batteries explode because of poor choices someone else made trying to do something like this, and man when they go, they're dangerous and scary. You really need to pick hardware that's all properly specced and sized for the job...there's a reason Eaton and APC charge what they do.
The better approach (if you have EE skills) is to build your own UPS with LiFePO4 batteries, a proper BMS, and a bunch of USB-C PD ports. Skip the lossy inverter entirely and pick equipment that runs on USB-C PD directly.
I don't know why nobody sells these as COTS yet.
Or buy equipment targeting alirack compatibility, i.e., with 240V PSUs that are, with minimal extra effort, also designed to work on the appropriate DC voltage, so that Alibaba, the alirack standard's originator, could delete the inverter from their UPS and feed PV MPPTs directly into the batteries.
I agree entirely, and wouldn't do it again.
To each their own. I'd personally sleep far more soundly with even a car battery UPS under my bed than with one of those consumer ready lithium ion portable power station batteries they sell on Amazon.
But if you can't explain the difference between voltage and current, or know what "short circuit" means, then this isn't something to poke at.
> I have a few things that mount remote file stores and these mounts usually fail upon boot due to the speed at which certain devices boot up - and I've decided not to solve that yet
If your OS is using systemd, you can fix that pretty easily by adding an After=network-online.target (so the unit isn't even started while there is no networking yet) and an ExecCondition shell script [1] that actually checks whether NFS / SMB on the target host is alive, as overrides for the fs mounts.
Add a bunch of BindsTo overrides to the mounts and the services that need the data, and you have yourself a way to stop the services automatically when the filesystem goes away.
I've long been in the systemd hater camp, but honestly, not having to wrangle with once-a-minute cronjobs to check for issues is actually worth it.
[1] https://forum.manjaro.org/t/for-those-who-use-systemd-servic...
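To make that concrete, a sketch of such overrides is below. The names are hypothetical (an NFS export mounted at /mnt/media, consumed by jellyfin.service, with a placeholder check-nas-alive.sh script), and since ExecCondition= only exists for service units, the liveness check sits on the consuming service rather than on the mount itself:

    # /etc/systemd/system/mnt-media.mount.d/override.conf
    [Unit]
    # don't even attempt the mount before the network is reported up
    After=network-online.target
    Wants=network-online.target

    # /etc/systemd/system/jellyfin.service.d/override.conf
    [Unit]
    # start after the mount, and stop automatically if the mount goes away
    After=mnt-media.mount
    BindsTo=mnt-media.mount

    [Service]
    # skip startup (without marking the unit failed) when the NAS doesn't answer
    ExecCondition=/usr/local/bin/check-nas-alive.sh

After editing, `systemctl daemon-reload` picks up the overrides.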
Here's a deeper article on ordering things around network startup: https://systemd.io/NETWORK_ONLINE/
It doesn't conflict with anything you've said, just a very handy document.
You can also use RequiresMountsFor to mark a mount (directory) as a hard dependency for the unit.
You can also use _netdev in the mount options, then systemd mount generator will generate the dependency on network automatically.
Even then, that doesn't resolve a power outage, primarily because the local nodes your modem transmits to will also be down in the area from that same outage. Most only have backup power for 10-30 minutes, if that; many have no backup power at all, which is why ISPs disclaim emergency phone calling over VoIP in their service agreements (for services that include unified communications).
So even if your local node could transmit, none of the others could, and they can't buffer either.
To mitigate a power outage, you would need both backup power and a cellular connection, and that connection would only be good for 2-3 hours (cell tower backups), and that would require something like a Cradlepoint.
Author here, indeed I didn't install a UPS. I've tried to keep my setup fairly minimal, and I'm consciously accepting that if there's a power outage my services will be down. I self-host exclusively for myself, not for others.
What I don't want though is a power outage putting my server offline while I'm on holiday, and not being able to access my services at all.
My ISP-provided router supports WireGuard, so I can use that to connect to my KVM and send the Wake-on-LAN packets.
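For reference, the Wake-on-LAN part is tiny: a magic packet is just 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, sent as broadcast UDP. A minimal sketch (the MAC in the usage line is a placeholder):

    #!/usr/bin/env python3
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    import socket
    import sys

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    if __name__ == "__main__":
        # e.g. python3 wol.py aa:bb:cc:dd:ee:ff
        wake(sys.argv[1])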
Out of curiosity, did you look through the BIOS options on your computer? Mine is much less capable than yours (it is a used mini-pc) but it has options to boot itself up upon resuming power.
I use a UPS for my internet, and then remote access such as Intel AMT will get you back into your systems if you've specced your hardware to have such features.
You REALLY should not expose AMT to the internet.
Of course not, but you would VPN in. Your router (or a server behind your router, though that may be riskier in terms of getting back in if something fails) should run a VPN if you plan to use something like AMT.
I just use a small UPS to make sure all data is written to the drives properly before the battery runs out.
Do you have power outages often? Even if I have one, my services can come up automatically without doing anything, when the power is restored.
Same. I have two small UPSes. The first I connect to my computers and it lasts about 15 minutes. Enough for me to save/checkpoint whatever in a restorable way. The second I connect to the wifi router. This lasts a while longer and is pretty useful. In those 15 minutes the local network still works. And if the power comes back in a minute or two (which happens more often than outages that last hours where I am), then I don't have to wait for the wifi router's painfully slow boot time.
Speaking of... does anyone know how to speed up wifi router boot time? Stupid thing takes 5 minutes almost.
> Speaking of... does anyone know how to speed up wifi router boot time? Stupid thing takes 5 minutes almost.
This is probably due to the access point having minimal hardware for the task, and its startup not being particularly well optimised, so "buy a better AP/router" is likely the most practical answer.
As an alternative, you could buy a small device (perhaps a recent RPi model) with more oomph (or add this task to an existing machine in your home lab setup), give it a wireless NIC if it doesn't already have one, and run hostapd to turn it into an AP. That might start up a lot faster.
> Speaking of... does anyone know how to speed up wifi router boot time? Stupid thing takes 5 minutes almost.
Maybe try using OpenWRT if your router hardware is supported
I would be happy with a smaller/lighter UPS that only provides 10 seconds of juice. I only purchased the thing because the lights in my apartment would flicker a couple of times a week.
I’ve spent a decent amount on EcoFlow units (Delta 2) that I use as an online UPS for my servers and networking. They work great, and I also recently installed dual 220-watt EcoFlow solar panels on my roof that pump in solar during the day. Works nicely, though the ROI admittedly is not there at all, just a cool thing.
> just a cool thing
I think that says it all. It's gone beyond practicality for me, and I'm OK with that. I'm also satisfied with the current setup; I don't need to spend more.
I have a couple of Ecoflow's and Bluetti, and a Segway LFP battery. They all work fine so far.
> I cut a hole in the side of a small UPS so I could connect it to a larger (car) battery for longer uptime
Can you share more about this? I have a APC Back UPS PRO USV 1500VA (BR1500G-GR) and it would be nice to know if this is possible with that one as well.
That UPS eventually died, and I'm not sure if it was because it was hooked up to a larger battery than it was designed for, but it's still only 12 volts so I don't think the electronics would notice. What they may notice is extended run-time in the event of a power failure.
It was a crude mod. Take the cover off and remove the existing little security alarm battery, use tin snips to cut a hole in the side of the metal UPS cover (this was challenging, it was relatively thick metal, I'd recommend using an angle grinder in an appropriately safe environment far away from the internals of the UPS), and feed the battery cables out through the hole. I probably got some additional cables with appropriately sized terminations to effectively extend the short existing ones (since they were only designed to be used within the device). And then connect it up to a car battery.
Cover any exposed metal on the connectors with that shrink rubber tubing or electrical tape. Be very careful with exposed metal around it anywhere, especially touching the RED POSITIVE pole of the battery. Get a battery box - I got one for the big-ass AGM battery.
Test it out on a laptop that's had its battery removed or disconnected, and that, just in case, you don't care too much about losing.
Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
Personally, I think it's safer and less hassle to go for a LiFePO4 (LFP) power-station-style device that has UPS capabilities. LFP batteries have 3,000-ish cycle lifetimes, which could be nearly ten years with daily use.
> use tin snips to cut a hole in the side of the metal UPS cover (this was challenging, it was relatively thick metal, I'd recommend using an angle grinder in an appropriately safe environment far away from the internals of the UPS)
Why not just drill a hole? Drill bits large enough to drill a hole for 120A cables exist.
> Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
If you're going this route I'd recommend a marine battery. Car batteries don't handle deep cycles well, and, TBH, UPS chargers aren't designed for failed car batteries (nor marine batteries) and can possibly cause an explosion if the lead-acid battery has a few dead cells.
No, don't do it. I understand the thought process, because they are both 12V batteries and the car battery has more capacity, but car batteries are made for the high bursts of energy a car engine ignition requires, whereas UPS batteries are made for slow drains. Also, these UPSes are made to charge battery cells in a certain way; even if you stack a bank of batteries of the same model in parallel hoping for more capacity, that's a problem for the UPS's charger: they won't charge evenly, and that eventually becomes a problem.
Marine deep cycle batteries might work better, but at some point I'm pretty sure lithium would be price competitive.
I like to keep my hardware competence sufficiently low so that I’m never cursed with the false confidence to even consider “drilling a hole in a UPS,” nevermind wiring it to a car battery in my closet…
You seem like the kind of guy who doesn't enjoy a nice sulfuric acid spill on the floor, haha
I will mess with all kinds of hardware, especially mini PCs and routers.. I once had a few hundred iPhones in my closet… but I draw the line at anything that uses batteries or electricity in a non-standard way. If the wire can’t carry data, I’m not touching it.
Maybe it’s because when I was a kid, I fancied myself an experimenter, and I had a wire ripped off a lamp, and touched the two ends together…
It isn't quite that bad. The batteries are close enough that it will work.
The real worry is that these are already a fire hazard, and if something goes wrong, insurance will blame the mod even if it's not at fault.
It's a bit trickier than you think. And can be dangerous.
The discharging circuitry is fine, but the _charger_ might overheat, because a larger battery can draw more current while charging for longer periods. I discovered that when I tried to attach a "lead-acid compatible" LFP battery to a UPS.
These days, it's just easier to buy a dedicated rack-mountable LFP battery originally meant for solar installations, an inverter/charger controller, and a rectifier. The rectifier output will serve as a "solar panel" input for the battery. You get a double-conversion UPS with days-long holdover time for a fraction of the cost of a lead-acid UPS.
This really doesn't seem like something one would want to mess around with if they don't know what they're doing (fire hazards and all...)
My commercial UPS already scares me at the fire potential. No way I would take on the risk of some DIY on something that could burn down the place or electrocute me.
There's not much to it, you just take the small 12V sealed lead-acid cell out from the bottom of the UPS, extend the two leads, and connect a larger capacity lead-acid battery of the same voltage.
If you don't recognize the terms "sealed", "lead-acid", "battery", "capacity", or "voltage" then you shouldn't do this.
About the only advantage of it is that it's cheap (free if you find a UPS in the trash with an already dead battery), but those cheap UPSs make really crap quality power, and for some of them the only reason they don't overheat is because their stock battery is so small. It's a bit like how you can cook a whole turkey in the microwave, but you probably don't want to.
I really don't get why people like the Minisforum stuff over alternatives. I've unfortunately been given one, and honestly I'm really unimpressed: between the crap firmware, no real expandability and all of the other compromises that come with buying AliExpress hardware. For the same money you can either pick up a used entry-model Dell/HP/Lenovo server (and if they're E3/W/other entry-level Xeons, they're usually not terrible on power) or get a good ATX chassis and power it with some off-lease Supermicro hardware. Then you don't need to compromise on things like OOB management, hot-swap bays, a real SAS card, real 10G NICs, ECC RAM, etc. etc. Maybe people are just afraid of doing a little bit of putting hardware together? I've seen and have systems that use the above gear that have been going for well over a decade now with basically no hiccups, and even the old Sandy Bridge era E3 stuff punches above probably even an RPi5 or N100 and doesn't draw more than 30-40W when you don't have spinning disks in there. I'm sure if you avoid AMD and go find a newer T-variant Intel chip, you can have your cake and eat it too.
The N100 is faster and more efficient than any Ivy Bridge E3. At idle the Xeon draws roughly 20W more, which works out to about $30 USD/year at the national average electricity prices. That gap widens as the load increases.
I can totally see why someone who doesn't need expandability would choose the cheap mini PC.
When I first got into homelabbing as a hobby, I built a massively overpowered server because I was highly ambitious. It mostly just drew power for projects that didn't require all the horsepower.
A decade later, I like NUCs and Pis and the like because they're tiny, low-power, and easy to hide. Then again, I don't have nearly as much time and drive for offhand projects as I get older, so who knows what a younger me would have decided with the contemporary landscape of hardware available to us today.
A decently powerful server is nice when you need it. Having a modern APU for decent encoding and decoding performance is great.
There are tasks that benefit from speed, but the most important thing is good idle performance. I don't want the noise, heat or electricity costs.
I'm reluctant to put a dedicated GPU into mine, because it would almost double the idle power consumption for something I would rarely use.
Even my old GTX 970 can throttle down to around 10W while still driving a display and, IIRC, decoding 1080p60 H.264, let alone putting it into a mode that actually matches S3/suspend-to-RAM via PCIe sleep states. I'm pretty sure laptops with extra dGPUs normalized aggressive power-gating sleep for their GPUs, keeping their impact on battery life negligible (beyond their weight otherwise being used for more battery) until you launch an application you've set to run on the dGPU.
I just purchased a Minisforum mobo BD795i SE with a Ryzen 9 7945HX (16 core, 32 thread). Can’t beat the price to performance. Building a NAS / VM server with 5x 14TB Seagate Exos drives, 2TB NVME, 500GB boot SSD, and 96GB of DDR5 memory. I was able to buy all components including a 3u hotswap 5x drive caddy for less than $1,200 all in. Can’t really beat that.
For appliance-like quickly replaceable little servers like my firewall or other one off roles, they are ok for me, but to run my TrueNAS system (ZFS) I gotta run something with a Supermicro board and ECC. Mission critical workloads that need 24/7 uptime (homelab general purpose always-on server)!
I am running TrueNAS on it; honestly, ECC is blown out of proportion. This isn't storing military state secrets.
I used to really like the minis, but I had to basically e-waste two of them because the Ethernet went bad (lightning strike, I think); there was really no way to replace it, and the OS would crash from the resulting hardware issues.
FWIW, if this keeps happening to you, you can get Ethernet surge protectors. Or use a couple of cheap media converters to go from copper to fiber and back to copper.
I recommend using optical networking if you are confident about the lightning strike Ethernet damage.
ECC-capable hardware tends to be very power hungry.
That's just an artifact of Intel disabling ECC on consumer processors.
There's no reason for ECC to have significantly higher power consumption. It's just an additional memory chip per stick and a tiny bit of additional logic on CPU side to calculate ECC.
If power consumption is the target, ECC is not a problem. I know firsthand that even old Xeon D servers can hit 25W full system idle. On AMD side 4850G has 8 cores and can hit sub 25W full system idle as well.
My HP 800 mini idles at 3W
Not always: the HP MicroServer N54L had support.
If anyone is looking to get started with a homelab at a good price, I can highly recommend checking eBay for a Dell Wyse 5070. They flooded the market for $50 and are likely powerful enough for many needs. They have an M.2 slot that supports SATA. The 'extended' version also has space for a small PCIe card and has a parallel and 2 serial ports for a blast from the past.
I built mine around an N150 board off of aliexpress. 6 SATA slots, 4x2.5G ethernet, 2x m.2 slots. Find a cheap second-hand case, a bit of RAM and you're ready to go. It's got hardware transcoding, handles 4K without breaking a sweat. And it consumes 6W!
I'm more inclined to go with an N305/N355 myself for the extra compute (more images/containers). But they're a pretty decent option. I set up a "forbidden router" at a friend's using one. It's been working great for his home use... Proxmox, OPNsense for routing, WireGuard, Pi-hole, a Docker VM running his AP control software, and a TrueNAS Scale VM serving a USB hard drive for home backups.
At home, I'm using a 5900H-based mini PC I bought a few years ago and a Synology NAS.
For a bit more money, Optiplex Micro / Lenovo Tiny / HP Mini series with at least 8th gen i5 are a good option too. Can be found from Ebay for about 70 - 120 USD, much more powerful than Wyse 5070 while still quite power efficient (about 10W idle, as opposed to 5W of Wyse). Usually they come with one NVME, one SATA 2.5" slot, some premium models even with PCIE.
All good options, just noting that based on some searches I made they all seem to lack serial ports compared to the Wyse, if that is something you care about (I personally do). There could be variants out there with serial ports though, and I would be happy to hear about them, and even happier if there are fanless variants/alternatives for those of us with very limited space at home and a need to avoid noise.
I don't know about fanless, especially with an i5 as opposed to n-something.
But not all those minis are the same. G4 (intel 8th gen) and G5 (intel 9th gen) HPs are horrendous. The fan makes an extremely aggravating noise, and I haven't found a way to fix it. Bonus points for both the fan and heatsink having custom mounts, so even if you wanted to have an ugly but quiet machine by slapping a standard cooler, you couldn't.
G6 versions (intel 10th gen) seem to have fixed this, and they're mostly inaudible on a desk, unless you're compiling something for half an hour.
My Lenovo m910q Tiny has serial ports. Two of them in fact. Cost me $50 on eBay.
What if power consumption is taken into account? Are there any devices in that category that are ok to leave on 24/7 ?
I have a somewhat bigger machine that hosts my homelab, an HP 800 G2 SFF. It takes "normal" components, so it can be modified. The only custom thing is the PSU, but the standard one is good enough for my needs. Bonus points for not requiring an external power adaptor.
It has an i5-6500, 32 GB RAM (16 + 2x8 DIMMs), 2 SATA SSDs and a 2x10Gb ConnectX-3. It runs 24/7 hosting OPNsense and Home Assistant on top of KVM (Arch Linux Hardened – didn't do anything specific to lower the power draw). Sometimes other stuff, but not right now.
I haven't measured it with this specific NIC, but before it had a 4x1Gb i350. With all ports up and all VMs running but not doing much, some power meter I got off Amazon said it pulled a little over 14W. The peak was around 40W when booting up.
Electricity costs 0.22 €/kWh here. The machine itself cost me 0 (they were going to throw it out at work), 35 for the nic and maybe 50 for the RAM. It would take multiple years to break even by buying one of these small machines. My plan is to wait out until they start having 10 Gb nics and this machine won't be able to keep up anymore.
A quick search online tells me ~5W for the Dell Wyse 5070, which does not sound unrealistic, as I have similar boxes that draw ~10W. So that's 32 to 62 kWh per year, which comes to roughly USD 6.5 to 13 per year assuming 20 cents per kWh, which another online search told me is reasonably realistic for the US.
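As a sanity check on those numbers, the arithmetic is just watts times hours in a year; the values below are assumed rather than measured:

    # Back-of-the-envelope annual energy and cost for an always-on box.
    def annual_energy_and_cost(watts: float, price_per_kwh: float) -> tuple[float, float]:
        kwh_per_year = watts * 24 * 365 / 1000
        return kwh_per_year, kwh_per_year * price_per_kwh

    print(annual_energy_and_cost(5, 0.20))   # ~(43.8 kWh, $8.76), inside the 32-62 kWh band above
    print(annual_energy_and_cost(10, 0.20))  # ~(87.6 kWh, $17.52), closer to the ~10W boxes mentioned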
Tangent, but it's always crazy to hear what other countries pay per kWh compared to the 0.4€/kWh in Germany.
Yeah, and Germany is expensive compared to the Nordics. 6.35 c/kWh right now in Finland, 2.54 c/kWh average over the last 30 days.
(clarification: that's euro cent, so 0.0635€ etc)
Bay Area, California: $0.61 base, $0.80 from 16:00 to 21:00.
Don't worry, there are some places in the USA that are even worse than that, like San Diego, San Francisco and Hawaii.
And in Iceland the average is around 0.07€/kWh.
Yea, I own a Wyse 5070 extended, and measured around 5W from the wall with nothing attached to the PCIe slot.
Another cheap option is Fujitsu Futros. They're meant to be thin clients for operating larger server setups (I guess?). Anyway, they come with 4-8GB RAM, an SSD (most people upgrade them), and even have a PCIe slot (depending on the model) to use with a 2.5 or 10 Gbit Ethernet card, for example. Around $50 on eBay.
My eyes widened when I read $600 for a mini PC. Got my 7th-gen OptiPlex for $45 and have yet to use its full potential.
Curious to know what you would use this for?
Local DNS, static site hosting, a local apt cache, various other network services (a UniFi controller if you've got those APs, for example), a remote/headless dev machine (maybe not for kernel or bigcorp Java development), or whatever else you want. Mail if you want. Anything :)
Those little thin clients aren't gonna be fast doing "big" things, but serving up a few dns packets or whatever to your local network is easy work and pretty useful.
I use it for media hosting. Backups (connected USB disks), home assistant, syncthing
Even these low-power CPU's are surprisingly capable. As an example of more fancy thing, one could slap in some external storage, install Jellyfin, and run their own local streaming service off such a machine. The CPU is modern enough for efficient hardware transcoding of a stream.
I bought Dell Wyse 5070s for building a Talos cluster using Proxmox. Pretty great, and you can upgrade the RAM to 32GB.
The published technical specs indicate that the maximum RAM is 16GB (2x 8GB).
https://www.delltechnologies.com/asset/pl-pl/products/thin-c...
Have you tried it with 32GB? If so, was this 2x 16GB or 1x 32GB?
I had to do a BIOS upgrade using Windows, but a single 32GB Corsair "DDR4 SODIMM for 11th Generation Intel Core Processors" worked.
I have run 2x 16GB in my 5070s with the stock BIOS.
I would say Raspberry Pi 5: cheap, small and widely used, so much of the stuff has already been done many times.
Looking at Raspberry Pi prices inside the EU, I can get 8-core laptops for a cheaper price, with display, dGPU et al.
No idea what happened, but Raspberry Pis have been super expensive for the last couple of years, which is why I decided to just go with used Intel NUCs instead. They cost around 80-150 EUR and use more electricity, but they are quite a good bang for the buck, and some variants also have 3x HDMI or Gbit/s Ethernet or M.2 slots you can use to have a SATA RAID in them.
Same. Switched over during the pandemic when full N95/N100 systems were cheaper than just the RPi board by itself. More compute/RAM, faster storage, included case and power supply, fewer headaches.
Is it better than an N100 setup from China? When you factor in the storage, power supply, case, (fan), and so on?
No. The main pain point with RPis is that they're SD-card based, which is slow and prone to failure. Configuring an SSD to be used as the main storage has also been a pain in the past (not sure if that's changed recently).
With an N100, you get a better, more upgradable system for around the same price and the same power usage. On top of that, you also get an x64 system that isn't limited by ARM quirks. I made the switch to N100s over a year ago and have had no issues with them so far.
Another tip: second-hand gaming PCs! They can be incredibly cheap and powerful due to upgrade cycles; just make sure to put a RAID 1 on it, as second-hand gamer gear might be less reliable.
The power usage is usually horrible, which is why most don't want it.
Nah, it basically costs nothing in contemporary power use. I metered my mid-range gaming PC that acts as a living room media server and it's $8/mo.
This is cool - I have a similar home lab on a Mac Mini [1].
The encryption question is interesting. I don't have disk encryption turned on, because I want the computer to recover from power failure. If power turns off then on, the server would be offline until I decrypt it.
How does your "Wake on LAN" work with the encryption?
[1] https://github.com/contraptionco/toolbox
They mention it in the article, in a big yellow note box.
You could use an IP KVM, or you could install Dropbear SSH server into the initramfs.
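If you go the Dropbear route, the rough shape on Debian-like systems is sketched below. Package names and the authorized_keys path vary by distro and release, and the initramfs also needs network access (e.g. via an ip= kernel parameter), so treat this as a sketch rather than a recipe:

    apt install dropbear-initramfs
    # authorize your key for the tiny SSH server baked into the initramfs
    cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys
    update-initramfs -u

    # after a reboot, from another machine:
    ssh root@<server-ip>   # lands in the initramfs environment
    cryptroot-unlock       # enter the LUKS passphrase, then normal boot continues

That way the box can power back on unattended and just sit at the passphrase prompt until you unlock it remotely.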
Author here, indeed I bought a JetKVM. A colleague had one and recommended it warmly! I'm very happy with it so far, but my usage has been rather basic.
I've heard that it might be difficult to get one in the US though.
https://jetkvm.com/
I've looked into similar solutions since the mini PC I'm using as a home server doesn't support WoL, but never pulled the trigger.
I keep putting it off since it is on a UPS and power outages aren't that frequent. Accessibility isn't too bad since it's under the TV stand.
PiKVM is another great solution.
Building a homelab is an awesome way to learn a lot of things.
I also used to over-engineer my homelab, but I recently took a more simplistic approach (https://www.cyprien.io/posts/homelab/), even though it’s probably still over-engineered for most people.
I realized that I already do too much of this in my day job, so I don’t want to do it at home anymore.
Why install Proxmox on top of Debian? Proxmox distributes an ISO that basically does the same as you do with preseeding. Although recently I did have to install Proxmox as an apt package on top of Debian because the Proxmox ISO wouldn't install properly. That actually happened twice. I think I'll just install Debian from now on…
I've encountered three separate machines where the Proxmox installer fails, both CLI and GUI. Installing Debian first gets around that. Never figured out why.
I have a setup that is perhaps not as robust, but where my primary aim was that I should be able to incrementally encapsulate the parts. https://wiki.roshangeorge.dev/w/One_Quick_Way_To_Host_A_WebA...
As an example, I use cloudflare tunnel to point to an nginx that reverse proxies all the services, but I could just as well point DNS to that nginx and it would still work. I had to rebuild the entire thing on my home server when I found that the cheap VPS I was using was super over-provisioned ($2/mo for 2 Ryzen 7950 cores? Of course it was) and I had this thing at home anyway, and this served me well for that use-case.
When I rebuilt it, I was able to get it running pretty quickly and each piece could be incrementally done: i.e. I could run without cloudflare tunnel and then add it to the mix, I could run without R2 and then switch file storage to R2 because I used FUSE s3fs to mount R2, so on and so forth.
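For anyone wanting to replicate the R2-over-FUSE part, the mount looks roughly like the sketch below. The bucket name, mount point and account ID are placeholders, and I'm not claiming these are the parent's exact options:

    # credentials file for s3fs, containing "ACCESS_KEY_ID:SECRET_ACCESS_KEY"
    chmod 600 ~/.passwd-s3fs

    s3fs my-bucket /mnt/r2 \
      -o url=https://<account-id>.r2.cloudflarestorage.com \
      -o passwd_file=$HOME/.passwd-s3fs \
      -o use_path_request_style

Because the bucket shows up as a normal directory, the services behind nginx don't need to know whether files live on local disk or in R2, which is what makes the incremental switch possible.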
Your VPS provider likely uses servers with ECC RAM; this home server doesn't. For most people it doesn't seem to matter, but for me it does - a home server where I store my data needs to have ECC RAM.
Seconded, but it's hard to find for small boxes. I have seen in-band ECC on ASUS NUCs, but that is as good as it gets from what I can tell.
There is the new Minisforum N5 Pro, which supports DDR5 ECC RAM. I'm keeping an eye on it.
I'm not too familiar with it. Why do you want or need ECC RAM for your homelab?
ECC RAM protects against bit flips (a bit changing to the wrong state). These can be caused by electric or magnetic interference. A pixel of a picture suddenly having the wrong color because of a bit flip is not that bad but some day an important file might end up corrupted because of one. I want to sleep well at night not having to worry about silent data corruption so ECC RAM it is. See here for more information: https://en.wikipedia.org/wiki/ECC_memory
Totally agree. That's why my homelab storage server(s) are second-hand enterprise machines. They come with ECC.
True! Unfortunately, an enterprise machine is more likely to have considerable power draw and quite possibly be much louder. I have a 2013 Apple Mac Pro (trashcan) that uses ECC. They're also cheap, small, and quiet.
This was exactly a use case I had in mind when building https://canine.sh -- also uses k3s as a provider, and provides a Heroku-like devex.
How to actually reliably expose a homelab to the broader internet is a little tricky. Cloudflare Tunnels mostly does the trick, but can only expose one port at a time, so the setup is somewhat annoying.
I've got basically raw internet coming in to my OPNSense device, although I had to request certain ports to be removed from the ISP's by-default-blocked policy, since I host a mail server - but the ISP is fine with this, they have a form for it, super easy.
Some family members are behind CGNAT, and I'm not sure if their ISP has the option to move out from behind that, but since they don't self-host it's probably slightly more secure from outside probes. We're still able to privately share communications via my VPN hub to which they connect, which allows me to remotely troubleshoot minor issues.
I haven't looked into cloudflare tunnels, but haven't felt the need.
What do you mean by "one port at a time"?
I run cloudflared on one machine, and it proxies one subdomain to one port, and another to a unix socket (could have been a second port, no problem).
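To make that concrete, a single cloudflared config.yml can carry several ingress rules, each mapping a hostname to a local port or unix socket. The hostnames, ports and paths below are placeholders, not the parent's actual setup:

    tunnel: <tunnel-uuid>
    credentials-file: /etc/cloudflared/<tunnel-uuid>.json
    ingress:
      - hostname: app.example.com
        service: http://localhost:8080
      - hostname: files.example.com
        service: unix:/run/files.sock
      # mandatory catch-all rule
      - service: http_status:404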
The encryption is interesting, but I wouldn't call this over-engineered at all; in fact, it's rather basic compared to a lot of homelabs I see people building, particularly where people are doing K8s or similar across multiple machines.
Also Proxmox was called out as the only choice when that is very much not the case. It is a good choice for sure, but there are others.
My home lab:
- Lightning protection
- 2 2U UPSes with 2 extra 2U battery packs each, temperature and humidity monitoring, and remote management
- 1x vSphere 7u3w Enterprise Plus + vCenter, 512 GiB ECC RAM, 16 TiB of RAID10 SSD, 96 thread EPYC server with 4 10 Gb optical NICs and 4x SAS3 ports
- 4U JBOD external NAS 330 TiB usable shared mostly over Samba with Time Machine support
- 2x Ryzen boxes with 128 GiB of ECC RAM, 100 Gb links to each other, 4 TiB RAID1 SSD, also used for distributed builds (also vSphere)
- Additional non-ECC 96 GiB tiny ITX Ryzen Windows lab machine
- Misc. non-ECC 128 GiB micro ITX Ryzen for additional distributed build capacity, currently Fedora w/ Podman and Docker
- Deciso OPNsense (Business license) router with 10 Gb optical ports, WireGuard, NTP-DHCP-DNS
- PoE 4x RPi 5 + SSD Ceph cluster
- Ubiquiti U7 Pro XGS APs
- Eufy security cameras with Home base
- PKI (TLS CA), TOTP 2FA ssh, YK gpg/ssh agent, RustDesk
- All boxes except lab, Fedora, and RPis are lights-out manageable and so don't need a KVM
To Do: NFS 4.2, LDAP, Krb5, TACACS+, SAML/OpenID (authelia), SNMP, Nagios or Grafana/Prometheus, K8s
Cost: $150/month in electricity
I love infrastructure. I run my own services at home too, and have for many years. But to be honest, the older I get the less fun it is to deal with issues that come up from time to time.
At some point you'll need to upgrade hardware and software, and you get to do the exercise over again. There will always be lessons learned, and it gets better each time. It's still work.
"I used the full disk with LLM, and set up disk encryption"
I know we're in the AI hype cycle but I bet you meant LVM there >:-)
Oh lord of the rings, that's embarrassing. Thanks for reporting the typo!
This is such a great post. I have a small collection of posts for inspiration in creating my homelab, and this is getting added to it. I currently have a Pi 4 with Pi-hole and a Beelink. Going to add one or two more machines.
After some research, it seems much easier to just back up the Proxmox config (and VM disk images, if they're needed) than to define or deploy Proxmox VMs with OpenTofu or ansible.
https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pm...
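A minimal version of that, assuming a stock Proxmox node (the VM ID and storage name are placeholders), is just:

    # snapshot the cluster configuration (pmxcfs is mounted at /etc/pve)
    tar czf /root/pve-config-$(date +%F).tar.gz /etc/pve

    # dump one VM's disks and config to the "local" storage
    vzdump 100 --storage local --mode snapshot --compress zstd

Restoring a VM from such a dump plus the saved config gets you most of the way back without any declarative tooling in the loop.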
I moved my side project to my garage so I don't have to pay Hetzner $600+ (and counting) every month.
I did the exact opposite. And by that I mean physically moved my homelab into their colo earlier this year. Runs like a charm, costs about 500€ per month total.
Sounds like a lot, but I was paying almost the same before: 220€ for power at home, 110€ for a dedicated Hetzner server, and 95€ for a secondary internet connection (so as not to interfere with the main uplink my partner and I use for home office).
Not having to deal with the extra heat, noise and used up space at home anymore has been worth it as well.
My storage needs were increasing by the day. Electricity is now a small monthly cost. I have more cores and RAM than ever, and can easily expand. The main machine now runs with 1TB of RAM and 15TB of SSD, and the other has more than 384G of RAM. I currently use 3TB of SSD storage and get way more performance than Hetzner's VMs with Ceph SSD disks. I do need redundancy, but that's not something Hetzner was giving me anyway, and if my anecdote isn't a mix-up on my part, I actually got database corruption on Hetzner that never happened on my own local setup.
I'd have colo'ed or used a dedicated server, as that's definitely better than their VMs, but they don't offer those in their US datacenters.
I am pretty happy with my current setup. I have significantly less downtime (a few minutes a month) than when I was on Hetzner - but this is mostly due to my need for more RAM at times.
I also used this as an excuse to get a 56G Mellanox fiber switch, PoE cameras, etc. in full homelab fashion, so it's been fun on top of being cheaper. Noise is not a concern; I got a soundproof server rack that's pretty nice. It takes up space, but I have kids, so my garage is near full at times anyway :)
I've got a single server and a /28 IP block on OVH for public-facing stuff, mostly because it's cheaper than the bump from my "home" internet plan to "business" just to be able to use common server ports (blocked on the home service).
Works well enough for what I need.
I use Cloudflare Tunnel to do most of this, and I am fine with the implications (like CF MITM'ing my website traffic).
Yeah good luck with that one.
I ran a home lab for a number of years. This was a fairly extensive setup: 4 rack-mount servers, UPS, Ethernet switch, etc., with LTO backups. It did streaming, email and file storage for the whole family as well as my own experiments.
One morning I woke up to a dead node - the DMZ service node. I found this out because my wife had no internet; it was running the NAT and email too. Some swapping of power supplies later, I found the whole thing was a complete brick. Board gone. It's 07:45, my wife can't check her schedule and I'm still trying to get 3 kids out of the door.
At that point I realised I'd fucked up by running a home lab. I didn't have the time or redundancy to have anyone rely on me.
I dug out the ISP's provided WiFi router, plugged it in, configured it quickly and got her laptop and phone working on it. Her email was down, but she could check her calendar etc. (on iCloud). By the end of the day I'd moved all the family email off to Fastmail and fixed everything to talk to the ISP router properly. I spent the next year cleaning up the shit that was on those servers and found out that between us we only had about 300 gig of data worth keeping, which was distributed out to individual MacBooks, with everyone responsible for backing up their own stuff (Time Machine makes this easy). Eventually email was moved to iCloud as well when custom domains came along.
I burned 7TB of crap, sold all the kit and never ran a home lab again. Then I realised I no longer had to pay for the energy or the hardware, or spend the time running it. There are no total outages and no problems if there's a power failure. The backups are simple, cheap and reliable. I don't even have a NAS now - I just bought everyone some Samsung T7 Shield disks.
I have a huge weight off my shoulders and more free time and money. I didn't learn anything I wouldn't have learned at work anyway.
Author here, thanks for the warning! I'm all too familiar with this kind of situation! This homelab is for fun and learning.
I wrote about why I don't (want to) self-host services for others: https://ergaster.org/posts/2023/08/09-i-dont-want-to-host-se...
Yeah everyone who runs a home lab that others use will eventually run into this.
Having an uptime SLA for your "hobby" is a huge pain in the ass and absolutely sucks the fun out of it.
For me it was the constant requests for new media or midnight complaints about jellyfin being down.
If you want to learn infra ops just get a job in the field and get paid for it.
I've never had Jellyfin + Infuse go down, and it's been a lifesaver when the power's up but the internet is down.
But yeah; things with SLAs are probably better off not self-hosted unless you really enjoy midnight fixes.
I've had issues with it chewing up all available CPU when it encounters corrupted files or a codec it doesn't recognize.
Ah, that explains it. I HandBrake everything into something that needs no re-encoding for Infuse to consume.
I can relate to this. I still run my own x86 box as a router to have the AP controller, but I'm strongly considering dropping this.
I need to update it and patch it, hoping nothing goes wrong in the process. If something breaks I'm the only one that can repair it, and I really don't want to hear my wife screaming at me at 7am when I wake up.
Eventually I came to your same conclusion, but I still run a hybrid setup that allows me to keep the router (for now), and a NAS for backup (3-2-1) and some local services. I run a dedicated server from Hetzner for "always on" services, so that the hardware, power redundancy and operational toil are offloaded. I gave up long ago on email: any hosting service will be way better than me doing it - I know I can do it, but is it worth my sanity? Nope.