66 comments

  • thrownawaysz 4 hours ago
    I went down the self-hosting route some years ago, but once critical problems hit I realized that beyond a simple NAS it can be a very demanding hobby.

    I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect anymore, because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not also add a dual-WAN failover router in case the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)

    Also (and that's maybe a me problem) I was using Tailscale, but I'm more "paranoid" about it nowadays. It's a single-point-of-failure service with US-only SSO login (MS, GitHub, Apple, Google); what if my Apple account gets locked because I redeem a gift card, and I can't use Tailscale anymore? I still believe in self-hosting, but I probably want something even more "self", to the extreme.

    • zrail 3 hours ago
      My spouse and I work at home and after the first couple multi-day power outages we invested in good UPSs and a whole house standby generator. Now when the power goes out it's down for at most 30 seconds.

      This also makes self-hosting more viable, since our availability is now constrained by our internet provider rather than by power.

      • rootusrootus 2 hours ago
        Yeah, we did a similar thing. Same situation, spouse and I both work from home, and we got hit by a multi-day power outage due to a rare severe ice storm. So now I have an EV and a transfer switch so I can go a week without power, and I have a Starlink upstream connection in standby mode that can be activated in minutes.

        Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.

        • VTimofeenko 1 hour ago
          We had a 5-day outage last year, got a generator at the tail end of the windy season, and made the exact same jokes.

          A year later another atmospheric river hit and we had a 4 hour outage. No more jokes.

          Make sure to run that generator once every few months with some load to keep it happy.

          • rootusrootus 1 hour ago
            Well, it's an EV with a big inverter, not a generator, but I get your point. And I do periodically fire it up and run the house on it for a little while, just to exercise the connection and maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
        • kiddico 1 hour ago
          Thanks for taking one for the team.
    • advael 3 hours ago
      Yea, I think my own preference for self-hosting boils down to distrust of a continuous dependency on a service controlled by a company, and a desire to minimize such dependencies. While there are FOSS and self-hostable alternatives to Tailscale, and indeed to Claude Code, using those services themselves simply replaces old dependencies on externally controlled cloud services with new ones.
    • digiown 52 minutes ago
      Tailscale has passkey-only account support but requires you to sign up in a roundabout way (first use an SSO, then invite another user, throw away the original). The tailnet lock feature also protects you to some extent, arguably more so than solutions involving self-hosting a coordination server on a public cloud.
    • gessha 52 minutes ago
      Tailscale recently added passkey login. Would that alleviate the SSO concern?

      Tailscale also has a self-hosted version I believe.

    • JamesSwift 3 hours ago
      Well, it's not a bottomless pit really. Yes, you need a UPS. That's basically it though.
      • bisby 2 hours ago
        Power outages here tend to last an hour or more. A UPS doesn't last forever, and depending on how much home compute you have, it might not last through anything more than a brief outage. A UPS doesn't magically solve things. Maybe you need a home generator to handle extended outages...

        How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.

        I host a lot of stuff, but Nextcloud to me is photo sync, not business. I can wait until I'm home to turn the server back on. It's not a bottomless pit for me, because I don't really care if it has downtime.

    • CGamesPlay 3 hours ago
      I really enjoy self-hosting on rented compute. It's theoretically easy to migrate to an on-prem setup, but I don't have to deal with the physical responsibilities while it's in the cloud.
      • Gigachad 21 minutes ago
        Depends on what you are trying to host. For many people it's either about keeping their private data local, or about stuff that has to be on the home network (Pi-hole / Home Assistant).

        If you just want to put a service on the internet, a VPS is the way to go.

    • altmanaltman 1 hour ago
      I mean, you're right that it's a demanding hobby. The question is: is it worth the switch from other services?

      I have 7 computers on my self-hosted network, and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that stays up regardless of local fluctuations etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option and isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially improves my professional skills as well.

    • cyberax 4 hours ago
      A long time ago, it was popular for ISPs to offer a small amount of space for personal websites. We might see a resurgence of this, but with cheap VPSs. Eventually.
      • SchemaLoad 3 hours ago
        Free static site hosting and cheap VPSs already exist. Self hosting is less about putting sites on the internet now and more about replicating cloud services locally.
        • Imustaskforhelp 3 hours ago
          VPSs are so dirt cheap that some of them only work because most people don't use 100% of the resources they're allocated; economies of scale help, but VPSs are definitely subsidized by that underuse.

          A cheap VPS with 1 GB of RAM and everything can cost around $10-11 per year, and something like Hetzner is cheap as well at around $30 a year (roughly $3 per month) while posting some great reliability numbers.

          If anything, people self-host because they own the servers, so upgrading becomes easier (though there are VPSs that target a niche and are worth a look, like storage VPSs, high-perf VPSs, high-mem VPSs etc., which can sometimes be dirt cheap for your specific use case).

          The other reason, I feel, is the ownership aspect of things. I own this server, I can upgrade it without breaking the bank, and I can build up my investment over time. And with complete ownership, you don't have to enforce T&Cs so much. Want to provide VPSs to your friends or family, or to people on the internet? Set up a Proxmox or Incus server and do it.

          Most VPS providers either outright ban reselling or, if they allow it, might ban your whole account for something someone else did. So some things are in jeopardy if you resell, simply because providers have to find automated ways of dealing with abuse at scale, and some are more lenient than others about bans (OVH is relaxed in this area, whereas Hetzner, for better or worse, is strict in its enforcement).

          • SchemaLoad 2 hours ago
            Self-hosting for me is important because I want to secure the data. I've got my files and photos on there, and I want the drive encrypted with my key, not just sitting on a drive I don't have any control over. Also because it plugs into my smart home devices, which requires being on the local network.

            For something like a website I want on the public internet with perfect reliability, a VPS is a much better option.

    • ekianjo 2 hours ago
      > I was in another country when there was a power outage at home.

      If you are going to be away from home a lot, then yes, it's a bottomless pit, because you have to build a system that doesn't rely on you being able to be there at any given time.

    • newsclues 2 hours ago
      I have a desktop I use, but if I had to start again, I'd build a low-power RPi or N100-type system that can be powered by a mobile battery backup with solar (flow type with sub-10ms switching and good battery chemistry for long life) and can handle the basic homelab tasks. Plan for power outages from the get-go rather than assuming unlimited and cheap power.
    • Imustaskforhelp 3 hours ago
      Hey, if Tailscale is something you are worried about, there are open source alternatives to it as well. But if your purpose is just to forward a single server port, wouldn't plain SSH itself be okay for you?

      You can even self-host Tailscale via Headscale, though I don't know how that experience goes, and there is genuinely open source software like Netbird, ZeroTier, etc. as well.

      You could also, if interested, just go the plain WireGuard route. It really depends on your use case, but yours sounds like a normal SSH use case.

      You could even use this with Termux on Android, plus SSH access via Dropbear I think, if you want. Tailscale is mainly for convenience, though, and for not having to deal with NATs and everything.

      But I feel like your home server might be behind a NAT, and in that case what I recommend is either A) run it over Tor, or via https://gitlab.com/CGamesPlay/qtm which uses iroh's infrastructure by default but can be self-hosted too, or B) (recommended): get an unlimited-traffic cheap VPS (I recommend UpCloud, OVH, Hetzner) for around $3-4 per month and install something like remotemoe https://github.com/fasmide/remotemoe or anything similar that effectively acts as a proxy.
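
      For what it's worth, the core of that remotemoe-style proxy is just an SSH reverse tunnel. A minimal sketch with plain OpenSSH (hostnames and ports are placeholders; the VPS needs GatewayPorts enabled in its sshd_config):

        # on the NAT'd home server: publish local port 80 on the VPS's port 8080
        ssh -N -R 0.0.0.0:8080:localhost:80 you@vps.example.com

      After that, http://vps.example.com:8080 reaches the home server; tools like remotemoe build on the same idea.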

      Sorry if I went a little overkill tho lol. I have played with these things too much, so I may be overarchitecting, but if you genuinely want self-hosting that is "self" to the extreme, Tor .onion services or i2p might benefit ya. Even just buying a VPS is a good step up.

      > I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect anymore, because the optical network router also had problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not also add a dual-WAN failover router in case the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)

      Laptops have a built-in UPS and are cheap; laptops and refurbished servers are a good entry point imo. Sure, it's a bottomless pit, but the benefits are well worth it; at some point you have to look at the trade-offs, and personally, laptops and refurbished/resale servers are that point for me. In fact, I used to run a git server on an Android tablet for some time, but I've been too lazy to figure out whether I want it charging permanently or what.

      • CGamesPlay 2 hours ago
        Thanks for the shout-out! If you have any experiential reports using QTM, I'd love to hear them!
        • Imustaskforhelp 2 hours ago
          Oh yeah, this is a really funny story considering what thread we are on, but I remember asking ChatGPT or Claude or Gemini or whatever xD to make QTM work, and none of them could figure it out.

          But I think what ended up working in the end was that my frustration took over and I just copy-pasted the commands from the readme, and if I remember correctly, they just worked.

          This is really ironic considering what thread we are on, but in the end, good readmes make self-hosting on a home server easier and fun xD

          (I don't exactly remember the ChatGPT conversations; perhaps they helped a bit, perhaps not, but I am 99% sure it was your readme that ended up helping, and ChatGPT etc. in fact took an hour or more and genuinely frustrated me, from what I vaguely remember.)

          I hope QTM gets more traction. It's built on solid primitives.

          One thing I genuinely want you to take a look at, if possible, is creating an additional piece of software, or adding functionality, to replace the careful dance we currently have to do to make it work (we have to send two large pieces of data between the two computers; I had to use some hacky solution like a piping server, or wormhole itself, for it).

          So what I am asking is whether the initial node pairing (ticket? sorry, I forgot the name of the primitive) between A and B could use wormhole itself, so that instead of the two sides having to send large chunks of data to each other, they just send 6 words or similar.

          Wormhole: https://github.com/magic-wormhole/magic-wormhole

          I even remember building some of my own CLI for something like this, using ChatGPT xD, but in the end I gave up because I wasn't familiar with the codebase or with how to make the two work together. I hope you can add it. I sincerely hope so.

          Another minor suggestion: please have an asciinema demo. I will create an asciinema patch between two computers if you want, but a working demo gif going from 0 -> running really, really would have saved me a few hours.

          QTM has lots of potential. Iroh is so sane: it can run directly on top of IPv4 and talk directly where possible, but it can also break through NATs, and you can even self-host the middle part yourself. I had thought about building such a project myself, so you can just imagine my joy when I discovered QTM from one of your comments a long time ago, for what it's worth.

          Wishing your project the best of luck! The idea is very fascinating. I would really appreciate a visual demo, though, and I hope we can discuss more!

          Edit: I remember the QTM docs felt really complex to me personally when all I wanted was one computer's port mapped to another computer's port. I think what helped in the end was the 4th comment, if I remember correctly; I might have used LLM assistance, and whether it helped I genuinely don't remember, but it definitely took me an hour or two to figure things out. That's okay, I still feel the software is a definite positive, and this might have been a skill issue on my side, but I can't stress enough how much asciinema docs, if possible, would genuinely help an average person figure out the product.

          (Slowly move towards the complex setups, with asciinema demos for each of them, if you wish.)

          Once again, good luck! I can't stress QTM enough, and I still strongly urge everyone to try it once: https://gitlab.com/CGamesPlay/qtm since it's highly relevant to the discussion.

    • tehlike 3 hours ago
      Starlink backup sounds fun now!
      • thrownawaysz 3 hours ago
        Way too expensive for that imo (but then again, might as well just go all in). A 5G connection is probably more than enough.
        • Imustaskforhelp 3 hours ago
          Honestly, I think there must be adapters that can use an unlimited 5G SIM's data plan as a fallback network (or perhaps even as the primary?).

          They would be cheaper than Starlink fwiw, and such connections are usually robust.

          That being said, one can use Tailscale or Cloudflare Tunnels to expose the server even if it's behind NAT. You mention in your original comment that you might be against that for paranoia reasons, and that's completely fine, but there are ways to do it if you want, which I have covered in depth in the other comment I wrote here.

          • numpad0 2 hours ago
            Some SOHO branch-office routers, like Cisco ISR models, can take cellular dongles and/or a SIM. Drivers for supported models are baked into ROM, and everything works through the CLI.
            • Imustaskforhelp 2 hours ago
              Man, I have this vague memory of being at a neighbour's house when we were all kids and the internet wasn't that widespread (I was really young), and they had this dongle they inserted a SIM card into for network access. That's why this idea has always persisted in my head in the first place.

              I don't know the name of that kind of dongle though; it was similar to those SD-card-to-USB things, ykwim. I'd appreciate it if someone could help me find it, if possible.

              But yeah, your point is fascinating as well. Y'know, another benefit of doing this is that, at least in my area, 5G (500-700 Mbps) is really cheap ($10-15 per month with unlimited bandwidth), while on the ethernet side of things I get 10x less bandwidth (40-80 Mbps), so much so that my brother and I genuinely considered this idea,

              except we thought that instead of buying a router like this, we'd use an old phone, insert the SIM into it, and route through that.

  • simonw 6 hours ago
    This post lists inexpensive home servers, Tailscale, and Claude Code as the big unlocks.

    I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.

    The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.

    Tailscale dramatically reduces this risk, because I can so easily configure it so my own devices can talk to my home server from anywhere in the world without the risk of exposing any ports on it directly to the internet.

    Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.

    • drnick1 6 hours ago
      I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

      I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.

      • buran77 4 hours ago
        > I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

        Ideal if you have the resources (time, money, expertise). There are different levels of qualification, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.

        > I am not sure why people are so afraid of exposing ports

        It's simple: you increase your attack surface, and with it the effort and expertise needed to mitigate that.

        > It's the way the Internet is meant to work.

        Along with no passwords or security. There's no prescribed way to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege: that's how security is meant to work.

        • lmm 4 hours ago
          > It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.

          Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.

          • appplication 3 hours ago
            Ah… I really could not disagree more with that statement. I know we don't want to trust BigCorp and whatnot, but a single exposed port plus an incomplete understanding of what you're doing is really all it takes to be compromised.
            • SchemaLoad 3 hours ago
              Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self hosted tools have not been through 1% of the security testing big tech services have.
      • zamadatix 4 hours ago
        It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/separate users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.

        There was a popular post about exactly this less than a month ago: https://news.ycombinator.com/item?id=46305585

        I agree maintaining WireGuard is a good compromise. It may not be "the way the internet was intended to work", but it lets you keep something that feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work to maintain than Tailscale.

        • drnick1 4 hours ago
          > There was a popular post less than a month ago about this recently https://news.ycombinator.com/item?id=46305585

          This incident precisely shows that containerization worked as intended and protected the host.

          • zamadatix 3 hours ago
            It protected the host itself but it did not protect the server from being compromised and running malware, mining cryptocurrency.

            Containerizing your publicly exposed service will also not protect your HTTP server from hosting malware, or your SMTP server from sending spam; it only means you've protected your SMTP server from your compromised HTTP server (assuming you've locked things down accurately, which is exactly the kind of thing people don't want to be worried about).

            Tailscale hands protection of the public-facing portion of the story to a company dedicated to keeping that portion secure. WireGuard (or similar) limits the exposure to a single service with low churn and a minimal attack surface. That's a very different discussion from preventing lateral movement alone. And that all goes without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).

        • SoftTalker 4 hours ago
          I just run an SSH server and forward local ports through that as needed. Simple (at least to me).
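
          For anyone new to it, a minimal sketch (hostname and ports are placeholders; 8096 stands in for whatever service you run):

            # make the home server's port 8096 appear on this laptop
            ssh -N -L 8096:localhost:8096 me@home.example.com

          Then http://localhost:8096 behaves as if you were on the LAN.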
          • zamadatix 3 hours ago
            I do that as well, along with using sshd as a SOCKS proxy for web-based stuff via Firefox, but it can be a bit of a pain to forward each service on each host individually if you have more than a few things going on - especially if you have things trying to use the same port and need to keep track of how you mapped them locally. It can also be a lot harder to manage on mobile devices: say you have some media or home automation services - they won't be as easy to access through port forwarding via a single public SSH host (if at all) as they would be over a VPN, and WireGuard is about as easy as a personal VPN gets.

            That's where wg/Tailscale come in - it's just a traditional IP network at that point. There's also less to do to shut up bad login attempts from spam bots and such; I once forgot to configure the log settings on sshd and ended up with GBs of logs in a week.

            The other big upside (besides avoiding a 3rd party) of putting in the slightly greater effort for wg/ssh/another personal VPN is that the latency and bandwidth to your home services will be better.
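
            For reference, the SOCKS bit is a single flag; a sketch (the port is arbitrary):

              # use sshd as a SOCKS5 proxy; point Firefox at localhost:1080
              ssh -N -D 1080 me@home.example.com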

          • Rebelgecko 43 minutes ago
            How many random people do you have hitting port 22 on a given day?
            • SoftTalker 5 minutes ago
              Dozens. Maybe hundreds. But they can't get in as they don't have the key.
          • Imustaskforhelp 2 hours ago
            Also, to Simon: I am not sure how this works on iPhone, but on Android you could probably use mosh and Termux to connect to the server and get the same end result without relying on a third party (in this case Tailscale).

            I am sure there must be an iPhone app that allows something like this too. I highly recommend more people take a look at such a workflow; I might look into it more myself.

            Tmate is a wonderful service if your home network is behind a NAT.

            I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self-hosted and is open source.

            Once again it has the third-party issue, but luckily it can be self-hosted, so you can even get a mini VPS on Hetzner/UpCloud/OVH and route traffic through that by hosting tmate there, so ymmv.

      • heavyset_go 5 hours ago
        > I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

        This is what I do. You can do Tailscale like access using things like Pangolin[0].

        You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.

        > I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.

        This is what I don't do. Anything that needs real internet access, like mail or raw web access, gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].

        [0] https://github.com/fosrl/pangolin

        [1] https://news.ycombinator.com/item?id=46136026

        • edoceo 5 hours ago
          Is a container not enough isolation? I SSH to the host (on an alternate port) and run the services (mail, http) in containers.
          • heavyset_go 4 hours ago
            Depends on your risk tolerance.

            I personally wouldn't trust a machine if a container was exploited on it; you don't know if there were any successful container escapes, kernel exploits, etc. Even if the attacker escaped with only user permissions, they can fill your box with boobytraps if they have container-granted capabilities.

            I'd just prefer to nuke the VPS entirely and start over than worry if the server and the rest of my services are okay.

            • Imustaskforhelp 2 hours ago
              Yea I feel that too.

              There are some well-respected compute providers you can use, and for a very low amount you can sort of offload this worry to someone else.

              That being said, VMs themselves are a good enough security box too. I consider running VMs even on your home server with public-facing services usually allowable.

          • Imustaskforhelp 2 hours ago
            I understand where you are coming from, but no, containers aren't enough isolation.

            If you are running some public service, it might have bugs (we do see RCE issues), or there can be some misconfiguration, and containers by default don't provide enough security if a hacker tries to break in. Containers aren't secure in that sense.

            Virtual machines are the intended tool for that use case, but they can be full of friction at times.

            If you want something of a middle compromise, I can't recommend incus enough. https://linuxcontainers.org/incus/

            It allows you to manage VMs like containers, provides a web UI, and gives an amount of isolation that you can (usually) trust.

            I'd say don't take chances with your home server, because that server can be inside your firewall and could, in a worst-case scenario, infect other devices. Virtualization with things like Incus or Proxmox (another well-respected tool) is the safest route and provides isolation you can trust. I highly recommend taking a look at it if you deploy public-facing services.

      • alpn 3 hours ago
        > I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.

        I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.

        https://wireplug.org

        • copperx 3 hours ago
          Apparently I'm ignorant about Tailscale, because your service description is exactly what I thought Tailscale was.
          • SchemaLoad 3 hours ago
            The main issue people have with Tailscale is that it's a centralised service that isn't self-hostable. The Tailscale server manages authentication and keeps track of your devices' IPs.

            Your eventual connection is direct to your device, but all the management before that runs on Tailscale's servers.

            • TOMDM 2 hours ago
              Isn't this what headscale is for?
        • hamandcheese 3 hours ago
          This is very cool!

          But I also think it's worth mentioning that for basic "I want to access my home LAN" use cases you don't need P2P; you just need a single public IP for your LAN, and perhaps dynamic DNS.

          • digiown 42 minutes ago
            Where will you host the wg endpoint to open up?

            - Each device? This means setting up many peers on each of your devices

            - Router/central server? That's a single point of failure, and often a performance bottleneck if you're on the LAN. And if it's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because they're on a VPN.

            Not to mention DDNS can create significant downtime.

            Tailscale fails over basically instantly, and is E2EE, unlike the hub setup.

            • hamandcheese 21 minutes ago
              To establish a wg connection, only one node needs a public IP/port.

              > Router/central server? That's a single point of failure

              Your router is a SPOF regardless. If your router goes down you can't reach any nodes on your LAN, Tailscale or otherwise. So what is your point?

              > If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.

              Secure your router. This is HN, not advice for your mom.

              > Not to mention DDNS can create significant downtime.

              Set your DNS ttl correctly and you should experience no more than a minute of downtime whenever your public IP changes.

              • digiown 14 minutes ago
                > one node needs a public IP/port

                A lot of people are behind CGNAT or behind a non-configurable router, which is an abomination.

                > Secure your router

                A typical router cannot be secured against physical access, unlike your servers which can have disk encryption.

                > Your router is a SPOF regardless

                Tailscale will keep your connection up over a downstream switch, for example. It will not go through the router if it doesn't have to. If you use it for other use cases, like kdeconnect synchronizing the clipboard between phone and laptop, that will also stay up independently of your home router.

          • kevin_thibedeau 53 minutes ago
            A public IP and DDNS can be impossible behind CGNAT. A VPN link to a VPS eliminates that problem.
            • hamandcheese 16 minutes ago
              When I said "you just need a single public IP" I figured it was clear that I wasn't claiming this works for people who don't have a public IP.
            • digiown 40 minutes ago
              The VPS (using wg-easy or similar solutions) will be able to decrypt traffic as it has all the keys. I think most people self-hosting are not fine with big cloud eavesdropping on their data.

              Tailscale really is superior here if you use tailnet lock. Everything always stays encrypted, and fails over to their encrypted relays if direct connection is not possible for various reasons.

      • Etheryte 4 hours ago
        Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there. No matter what I'm hosting, it's a lot more convenient to not have to worry about that even for a second.
        • drnick1 4 hours ago
          > Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there

          Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.
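
          If you do deploy it, a minimal jail.local sketch (values are just common defaults; adjust to taste):

            [sshd]
            enabled  = true
            maxretry = 5
            findtime = 10m
            bantime  = 1h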

          • TheCraiggers 2 hours ago
            Port scanners don't try to ssh into my server with various username/password combinations.

            I prefer to hide my port instead of using F2B for a few reasons.

            1. Log spam. Looking through my audit logs for anything suspicious is horrendous when there are just megs of login attempts spanning days.

            2. F2B has banned me in the past due to various oopsies on my part, which is not good when I'm out of town and really need to get into my server.

            3. Zero days may be incredibly rare in ssh, but maybe not so much in Immich or any other relatively new software stack being exposed. I'd prefer not to risk it when simple alternatives exist.

            Besides the above, using Tailscale gives me other options, such as locking down cloud servers (or other devices I may not have hardware control over) so that they can only be connected to, but not out of.

        • NewJazz 2 hours ago
          This is a good reason not to expose random services, but a WireGuard endpoint simply won't respond at all if someone hits it with the wrong key. It's better even than key-based ssh.
      • digiown 46 minutes ago
        A mesh-type wireguard network is rather annoying to set up if you have more than a few devices, and a hub-type network (on a low powered router) tends to be so slow that it necessitates falling back to alternate interfaces when you're at home. Tailscale does away with all this and always uses direct connections. In principle it is more secure than hosting it on some router without disk encryption (as the keys can be extracted via a physical attack, and a pwned router can also eavesdrop on traffic).
      • epistasis 3 hours ago
        I've managed WireGuard in the past, and would never do it again. Generating keys, distributing them, configuring it all... bleh!

        Never again, it takes too much time and is too painful.

        Certs from Tailscale are reason enough to switch, in my opinion!

        The key with successful self hosting is to make it easy and fast, IMHO.

      • Frotag 4 hours ago
        Speaking of WireGuard, my current topology has all peers talking to a single peer that forwards traffic between them (for hole punching / peers with dynamic IPs).

        But some peers are sometimes on the same LAN (e.g. my phone is sometimes on the same LAN as my PC). Is there a way to avoid forwarding traffic through the server peer in this case?

        • Frotag 3 hours ago
          I guess I'm looking for wireguard's version of STUN. And now that I know what to google for, finally found some promising leads.

          https://github.com/jwhited/wgsd

          https://www.jordanwhited.com/posts/wireguard-endpoint-discov...

          https://github.com/tjjh89017/stunmesh-go

        • wooptoo 3 hours ago
          Two separate WG profiles on the phone: one acting as a proxy (which forwards everything), and one acting just as a regular VPN without forwarding.
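
          Concretely, the only difference between the two profiles is AllowedIPs; a sketch (the subnet is a placeholder):

            # proxy profile: route everything through the tunnel
            AllowedIPs = 0.0.0.0/0, ::/0
            # plain VPN profile: route only the VPN subnet
            AllowedIPs = 10.0.0.0/24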
        • megous 3 hours ago
          Have your network-management software set up a default route with a lower metric than the WireGuard default route, based on the Wi-Fi SSID. This can be done easily with systemd-networkd, because you can match .network file configurations on SSID. You're probably out of luck with this approach on network-setup-challenged devices like so-called smart phones.
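
          A sketch of such a unit (interface name, SSID, and metric value are placeholders):

            # /etc/systemd/network/10-home-wifi.network
            [Match]
            Name=wlan0
            SSID=MyHomeWifi

            [Network]
            DHCP=yes

            [DHCPv4]
            # lower metric than the wireguard default route, so this one wins at home
            RouteMetric=100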
      • sauercrowd 5 hours ago
        People are not full-time maintainers of their infra though; that's very different from companies.

        In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.

        • buildfocus 5 hours ago
          Wireguard is _really_ simple in that sense though. If you're not doing anything complicated it's very easy to set up & maintain, and basically just works.

          You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.
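
          The config the QR code encodes is itself tiny; a sketch (keys, addresses, and hostname are placeholders):

            [Interface]
            PrivateKey = <client private key>
            Address = 10.0.0.2/32

            [Peer]
            PublicKey = <router public key>
            Endpoint = home.example.com:51820
            AllowedIPs = 0.0.0.0/0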

      • Topgamer7 5 hours ago
        I don't have a static IP, so Tailscale is convenient, and less likely to fail when I really need it, as opposed to trying to deal with dynamic DNS.
      • SchemaLoad 4 hours ago
        If you expose ports, literally everything you are hosting, and every plugin, is attack surface. Most of this stuff is built by single hobbyist devs on weekends. You are also exposed to any security mistakes in your configuration. On my first attempt at self-hosting, my Redis was compromised because I didn't realise I had exposed it to the internet with no password.

        Behind a VPN your only attack surface is the VPN which is generally very well secured.

        • sva_ 4 hours ago
          You exposed your Redis publicly? Why?

          Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through WireGuard. NEVER expose this publicly, even if you don't have admin:admin credentials.

          • SchemaLoad 4 hours ago
            I actually didn't know I had. At the time I didn't properly understand how Docker networking worked, and I exposed Redis to the host so my other containers could access it. And since this was on a VPS with a dedicated IP, that made it exposed to the whole internet.

            I know better now, but there are still a million other pitfalls to fall into if you are not a full-time system admin. So I prefer to just put it all behind a VPN and know that it's safe.

            • drnick1 3 hours ago
              > but there are still a million other pitfalls to fall into if you are not a full-time system admin.

              Pro tip: after you configure a new service, review the output of ss -tulpn. This will tell you what ports are open. You should know exactly what each line represents, especially those that bind to 0.0.0.0 or [::] or other public addresses.

              The pitfall you mentioned (Docker automatically punching a hole in the firewall for the services it manages when an interface isn't specified) is discoverable this way.
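
              Illustrative (made-up) output showing the difference at a glance:

                $ ss -tulpn
                tcp LISTEN 0 128 127.0.0.1:8096 0.0.0.0:* users:(("jellyfin",...))
                tcp LISTEN 0 511 0.0.0.0:6379   0.0.0.0:* users:(("docker-proxy",...))

              The first line is loopback-only; the second is Redis reachable from anywhere. In docker-compose, publishing the port as "127.0.0.1:6379:6379" instead of "6379:6379" avoids that.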

              • jsrcout 3 hours ago
                Thanks, didn't know about this one.
        • Jach 2 hours ago
          I have a VPS with OVH. I put Tailscale on it, and it's pretty cool to be able to install and access local (to the server) services like Prometheus and Grafana without having to expose them through the public firewall or mess with more apache/nginx reverse proxies. (Same for individual services' /metrics endpoints that are served on a different port.)
      • CSSer 5 hours ago
        The answer is people who don't truly understand how it works being in charge of others who also don't, in different ways. In the best case, there's an under-resourced and over-leveraged security team issuing overzealous edicts in the desperate hope of avoiding some disaster. When the sample size is one, it's easy to look at it and come to your conclusion.

        In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.

      • esseph 5 hours ago
        With open ports, you have dozens or hundreds of applications and systems that can be attacked.

        With Tailscale / ZeroTier / etc., the connection is initiated from the inside, to facilitate NAT hole punching and to work over CGNAT.

        WireGuard removes a lot of attack surface, but it wouldn't work behind CGNAT without a relay box.

    • MattSayar 47 minutes ago
      Just be sure to run it with --accept-dns=false, otherwise you won't have any outbound internet on your server if you ever get logged out. That was annoying to find out (but easy to debug with Claude!)
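
      For the record, that's:

        # keep the server's own resolver config instead of Tailscale's MagicDNS
        tailscale up --accept-dns=false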
    • philips 6 hours ago
      I agree! Before Tailscale I was completely skeptical of self hosting.

      Now I have Tailscale on an old Kindle downloading epubs from a server running Copyparty. It's great!

      • ryandrake 6 hours ago
        Maybe I'm dumb, but I still don't quite understand the value-add of Tailscale over what Wireguard or some other VPN already provides. HN has tried to explain it to me but it just seems like sugar on top of a plain old VPN. Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.
        • Jtsummers 6 hours ago
          I think you answered the question. Sugar. It's easier than managing your own Wireguard connections. Adding a device just means logging into the Tailscale client, no need to distribute information to or from other devices. Get a new phone while traveling because yours was stolen? You can set up Tailscale and be back on your private network in a couple minutes.

          Why did people use Dropbox instead of setting up their own FTP servers? Because it was easier.

        • simonw 5 hours ago
          If you're confident that you know how to securely configure and use Wireguard across multiple devices then great, you probably don't need Tailscale for a home lab.

          Tailscale gives me an app I can install on my iPhone and my Mac and a service I can install on pretty much any Linux device imaginable. I sign into each of those apps once and I'm done.

          The first time I set it up that took less than five minutes from idea to now-my-devices-are-securely-networked.

        • Cyph0n 6 hours ago
          It’s a bit more than sugar.

          1. One command (or step) to have a new device join your network. WireGuard configs and interfaces are managed on your behalf.

          2. ACLs that give you fine-grained control over connectivity. For example, server A should never be able to talk to server B.

          3. NAT is handled completely transparently.

          4. SSO and other niceties.

          For me, (1) and (2) in particular make it a huge value add over managing Wireguard setup, configs, and firewall rules manually.
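
          For a flavor of (2), a sketch of the policy file (HuJSON; users/tags are placeholders, and since ACLs are default-deny, anything not listed is blocked):

            {
              "acls": [
                // my own devices can reach everything
                {"action": "accept", "src": ["me@example.com"], "dst": ["*:*"]},
                // server-a may reach server-b on 443, and nothing else anywhere
                {"action": "accept", "src": ["tag:server-a"], "dst": ["tag:server-b:443"]}
              ]
            }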

        • SchemaLoad 4 hours ago
          Tailscale is WireGuard, but it automatically sets everything up for you, handles DDNS, can punch through NAT and CGNAT, etc. It also runs a WireGuard endpoint on every device, so rather than having a hub server in the LAN, every device connects directly to every other. Particularly helpful if it's not just one LAN you are trying to connect to, but lots of devices in different places.
        • zeroxfe 5 hours ago
          > Plex is just sugar on top of file sharing.

          right, like browsers are just sugar on top of curl

          • edoceo 5 hours ago
            curl is just sugar on sockets ;)
            • epistasis 3 hours ago
              SSH is just sugar on top of telnet and running your own encryption algorithms by hand on paper and typing in the results.
        • navigate8310 1 hour ago
          Tailscale is able to punch holes through CGNAT, which vanilla WireGuard cannot.
        • Frotag 6 hours ago
          I always assumed it was because a lot of ISPs use CGNAT, and using Tailscale's servers for hole punching is (slightly) easier than renting and configuring a VPS.
        • drnick1 5 hours ago
          > Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.

          Speaking of that, I have always preferred a plain Unbound instance and a Samba server over fancier alternatives. I guess I like my setups extremely barebones.

          • ryandrake 5 hours ago
            Yea, my philosophy for self-hosting is "use the smallest amount of software you can in order to do what you really need." So for me, sugar X on top of fundamental functionality Y is always rejected in favor of just configuring Y.
        • atmosx 6 hours ago
          You don’t have to run the control plane and you don’t have to manage DNS & SSL keys for the DNS entries. Additionally the RBAC is pretty easy.

          All these are manageable through other tools, but it’s more complicated stack to keep up.

        • mfcl 6 hours ago
          It's plug and play.
          • Forgeties79 5 hours ago
            And some people may not value that, but a lot of people do. It's part of why Plex has become so popular while fewer people know about Jellyfin. One is turnkey, the other isn't.

            I could send a one page bullet point list of instructions to people with very modest computer literacy and they would be up and running in under an hour on all of their devices with Plex in and outside of their network. From that point forward it’s basically like having your own Netflix.

        • Skunkleton 6 hours ago
          Yes, that is really all it is.
        • lelandbatey 1 hour ago
          If Plex is "just file sharing" then I guarantee you'd find Tailscale "just WireGuard".

          I enjoy that relative "normies" can depend on it/integrate it without me having to go through annoying bits. I like that it "just works" without requiring loads of annoying networking.

          For example, my aging mother just got a replacement computer and I am able to make it easy to access and remotely administer by just putting Tailscale on it, and have that work seamlessly with my other devices and connections. If one day I want to fully self-host, then I can run Headscale.

    • SchemaLoad 4 hours ago
      Yeah, same story for me. I did not trust my sensitive data to random self-hosted apps with no real security team. But now I can put the entire server on the local network only and split-tunnel VPN from my devices, and it just works.

      LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.

    • JamesSwift 3 hours ago
      Just use subpath routing and fail2ban, and I'm very comfortable exposing my home setup to the world.

      The only thing served on / is a hello-world nginx page. For everything else you need to know the randomly generated subpath route.
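
      Roughly, in nginx terms (the random path, port, and cert details are placeholders):

        server {
            listen 443 ssl;
            # ssl_certificate lines omitted

            # hello-world decoy at the root, 404 everywhere else
            location = / { return 200 'hello world'; }
            location / { return 404; }

            # the real service lives behind an unguessable path
            location /svc-3f9c2a7d81/ {
                proxy_pass http://127.0.0.1:8096/;
            }
        }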

    • PaulKeeble 5 hours ago
      It's especially important in the CGNAT world that has been created, and given the enormous slog that the IPv6 rollout has ultimately become.
    • comrade1234 5 hours ago
      I just have a VPN server on my fiber modem/router (EdgeRouter 4) and use VPN clients on my devices. I actually have two VPN networks: one that can see the rest of my home network (and server), and one that is completely isolated, can't see anything else, and only does routing. No need for a third party, and I have more flexibility.
    • Melatonic 4 hours ago
      Why not Cloudflare Tunnels?
    • dangoodmanUT 6 hours ago
      Definitely, but to be fair, beyond that it's just Linux. Most people would need Claude Code to get whatever they want to use Linux for running reliably (systemd services, etc.).
    • shadowgovt 5 hours ago
      Besides the company that operates it, what is the big difference between Tailscale and Cloudflare Tunnels? I've seen Tailscale mentioned frequently, but I'm not quite sure what it gets me. If it's more like a VPN, is it possible to use it on an arbitrary device like a library kiosk?
      • ssl-3 4 hours ago
        I don't use Cloudflare tunnels for anything.

        But Tailscale is just a VPN (and by VPN, I mean something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.

        Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.

        But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.

      • vachina 3 hours ago
        You can self-host a Tailscale network entirely on your own, without making a single call to Tailscale Inc.

        Your cloudflare tunnel availability depends on Cloudflare’s mood of the day.

  • valcron1000 1 hour ago
    > When something breaks, I SSH in, ask the agent what is wrong, and fix it.

    > I am spending time using software, learning

    What are you actually learning?

    PSA: OP is a CEO of an AI company

    • enos_feedler 1 hour ago
      You are learning what it takes to keep a machine up and running. You still witness the breakage, you can still watch the fix, and you can review what happened. What your question implies is that, compared to doing things without AI, you are learning less (or perhaps, you believe, nothing). You are definitely learning less about mucking around in Linux. But if the alternative was never running a Linux machine at all, because you didn't want to deal with running one, you are learning infinitely more.
  • fhennig 4 hours ago
    I think it's great that people are getting into self-hosting, but I don't think it's _the_ solution to get us off of big tech.

    Having others run a service for you is a good thing! I'd love to pay a subscription for a service run as a cooperative, where I'm not just paying a subscription fee; instead, I'm a member and I get to decide what gets done as well.

    This model works very well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly: rents are kept low, the building is kept intact, no unnecessary expensive stuff gets added. And most importantly, no worries about the building ever being sold and things going south. That's what I would like for my cloud storage, e-mail, etc.

  • windex 19 minutes ago
    I had problems with Tailscale being flaky about a year ago; it would stop responding, taking down networking with it. I've since ripped it out and gone with a VPS-based WireGuard setup for all PCs and mobiles. Stable since then.
  • dwd 4 hours ago
    I've been self-hosting for the last 20 years, and I would have to say LLMs are good for generating suggestions when debugging an issue I haven't seen before, or for one I have seen before but want a quicker fix for. I've used them to generate bash scripts and firewall regexes.

    On self-hosting: be aware that it is a warzone out there. Your IP address will be probed constantly for vulnerabilities, and even those probes will need to be dealt with, as most automated probes don't throttle and can impact your server. That's probably my biggest issue, along with email deliverability.

    • FaradayRotation 1 minute ago
      ~10 years ago I remember how shocked I was the first time I saw how many people were trying to probe my IP on my home router, from random places all over the globe.

      Years later I still had the same router. Somewhere along the line, I fired the right neurons and asked myself, "When was the last time $MANUFACTURER published an update for this? It's been a while..."

      In the context of just starting to learn about the fundamentals of security principles and owning your own data (ty hackernews friends!), that was a major catalyst for me. It kicked me onto a self-hosting trajectory. LLMs have saved me a lot of extra bumps and bruises and barked shins in this area; they helped me head in the right direction fast enough.

      Point is, parent comment is right. Be safe out there. Don't let your server be absorbed into the zombie army.

    • MrDarcy 4 hours ago
      The best solution I've found for probes is to put all my eggs into one basket listening on 443.

      Haproxy with SNI routing was simple and worked well for me for many years.

      Istio installed on a single node Talos VM currently works very well for me.

      Both have sophisticated circuit breaking and ddos protection.

      For users I put admin interfaces behind wireguard and block TCP by source ip at the 443 listener.

      I expose one or two things to the public behind an oauth2-proxy for authnz.

      Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.
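
      For the curious, the SNI-routing core in haproxy is only a few lines; a sketch (hostnames and backends are placeholders):

        frontend tls_in
            bind :443
            mode tcp
            # wait for the TLS ClientHello so we can read the SNI
            tcp-request inspect-delay 5s
            tcp-request content accept if { req_ssl_hello_type 1 }
            use_backend photos if { req.ssl_sni -i photos.example.com }

        backend photos
            mode tcp
            server immich 10.0.0.10:443

      Connections matching no rule just get closed, which also keeps probes quiet.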

      • aaronax 4 hours ago
        And use a wildcard cert so that all your services don't get probed due to cert transparency logs.
    • SchemaLoad 3 hours ago
      These days I just wouldn't expose my home server to the internet at all. LAN only, with a VPN. It does mean you can't share links and such with other people, but your server is now very secure, and most of the stuff you do on it doesn't need public access anyway.
  • legojoey17 1 hour ago
    I just got around to a fresh NixOS install and I couldn't be happier, as I've been able to do practically everything via Codex while keeping things concise and documented (given it's Nix, not a bunch of ad-hoc commands like in the past).

    I recently had a bunch of breakages and needed to port a setup: I had a complicated k3s-container-on-Proxmox setup but needed it in a VM to fix various disk mounts (I had hacked on ZFS mounts, and was swapping it all for Longhorn).

    As is expected, life happened and I stopped having time for anything, so the homelab was out of commission. I probably would still be sitting on my broken lab, given the lack of time.

  • Humorist2290 6 hours ago
    Fun. I don't agree that Claude Code is the real unlock, but mostly because I'm comfortable doing this myself. That said, the spirit of the article is spot on. The accessibility of running _good_ web services has never been better. If you have a modest budget and an interest, that's enough -- the skill gap is closing. That's good news, I think.

    But Tailscale is the real unlock, in my opinion. Having a slot machine cosplaying as a sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family, if they can get past an app install and login.

    I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.

    • heavyset_go 5 hours ago
      I believe Vaultwarden keeps data encrypted at rest with your master key, so some of the problems inherent to hosting such data can be mitigated.
      • Humorist2290 5 hours ago
        I can believe this, and it's a good point. I believe Bitwarden does the same. I'm not against Vaultwarden in particular but against colocation of highly sensitive (especially orthogonally sensitive) data in general. It's part of a self-hoster's journey I think: backups, isolation, security, redundancy, energy optimization, etc. are all topics which can easily occupy your free time. When your partner asks whether your photos are more secure in Immich than Google, it can lead to an interesting discussion of nuances.

        That said, I'm not sure if Bitwarden is the answer either. There is certainly some value in obscurity, but I think they have a better infosec budget than I do.

  • visageunknown 1 hour ago
    I find LLMs remove all the fun for me. When I build my homelab, I want the satisfaction of knowing that I did it. And the learning gains that only come from doing it manually. I don't mind using an LLM to shortcut areas that are just pure pain with no reward, but I abstain from using it as much as possible. It gives you the illusion that you've accomplished something.
    • cyberrock 6 minutes ago
      Getting it up and running is fun, but I find maintaining some services a pain. For example, Authelia has breaking configuration changes every minor release, and fixing that easily eats anywhere from one to several hours every time. I gave up at 4.38 and just tossed the patch notes into NotebookLM.
    • lurking_swe 36 minutes ago
      > It gives you the illusion that you've accomplished something.

      What’s the goal? If the act of _building_ a homelab is the fun part, then I agree 100%. If _having_ a reliable homelab that the family can enjoy is the goal, then this doesn’t matter.

      For me personally, my focus is on “shipping” something reliable with little fuss. Most of my homelab skills don’t translate to my day job anyway. My homelab has a few docker compose stacks, whereas at work we have an internal platform team that lets me easily deploy a service on K8s. The only overlap here is docker lol. Manually tinkering with ports and firewall rules, using sqlite, backups with rsync, etc…all irrelevant if you’re working with AWS from 9-5.

      I guess I’m just pointing out that some people want to build it and move on.

    • Gigachad 13 minutes ago
      I don’t give them direct access to my computer. I just use them as an alternative to scrolling reddit for answers. Then I take the actions myself.
  • le_meer 39 minutes ago
    Just got a home server. Immich is awesome! How's Caddy working out, though? I need a way to expose Immich to the public internet (not just a VPN), something like photos.domain.com.

    For now I'm just using Cloudflare tunnels, but ideally I also want to do that myself (without getting DDoSed).

    • kilobaud 33 minutes ago
      I am curious what you mean by doing it yourself, i.e., do you mean (as perhaps an oversimplification) having a DNS record pointing at your home IP address? What are you wanting to see as the alternative to a Cloudflare tunnel?
    • digiown 34 minutes ago
      Look up mutual TLS / client authentication. Caddy and Immich both support it. Then you can expose it to the internet reasonably securely.
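
      Roughly like this in a Caddyfile (an untested sketch; the CA path and upstream port are placeholders, and directive names can vary between Caddy versions):

      ```
      photos.example.com {
          tls {
              client_auth {
                  mode require_and_verify
                  trusted_ca_cert_file /etc/caddy/client-ca.pem
              }
          }
          reverse_proxy 127.0.0.1:2283
      }
      ```

      Only devices holding a certificate signed by your CA get past the TLS handshake; everything else is dropped before it ever reaches Immich.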
  • wswin 5 hours ago
    Home NAS servers already ship with user-friendly GUIs. Personally I haven't used them, but I would certainly prefer that, or recommend it to tech-illiterate people, over letting an LLM manage the server.
  • tezza 5 hours ago
    Wait… tailscale connection to your own network, and unsupervised sysadmin from an oracle that hallucinates and bases its decisions on blog post aggregates?

    p0wnland. this will have script kiddies rubbing their hands

    • asciii 4 hours ago
      Hope OP has nice neighbors, because sharing that password is basically handing over the keys to the kingdom.
  • dpe82 3 hours ago
    I've recently begun moving the systems I administer to Claude-written NixOS configs. Nix is great but can be a real pain to write yourself; Claude removes the pain.
    • hooo 3 hours ago
      Me too... using that same logic.
      • dpe82 2 hours ago
        Now if only there were a Nix-like system for FreeBSD! :)
  • comrade1234 5 hours ago
    Prices are going to have an effect here. I have a 76TB backup array of 8 drives. A few months ago one of my 10TB drives failed and I replaced it with a 12TB WD Gold for 269CHF. I was thinking of building a new backup array (for fun), so I priced the same drive, and now it's 409CHF.

    It's not tariffs (I'm in Switzerland). It's 100% the buildout of data centers for AI.

  • Finbarr 3 hours ago
    I used Codex to set up a raspberry pi as a VPN with WireGuard. I had no similar experience before and it was super easy. I used Claude Code to audit and clean up a 10+ year old AWS account- patching security, shutting down redundant services, simplifying the structure. I want Claude Code to replace every bad UI out there. I know what outcome I want and don’t need to learn all the details to get there.
  • chasd00 4 hours ago
    What I do at home is Ubuntu on a cheap small computer I found on eBay. ufw blocks everything except 80, 443, and 22. Set up ssh to not use passwords and ensure nginx+letsencrypt doesn't run as root. Then forward 80 and 443 from my home router to the server so it's reachable from the internet. That's about it: now I have an internet-accessible reverse proxy to surface anything running on that server. The computers on the same LAN (just my laptop, basically) have hosts-file entries for the server. My registrar handles DNS for the external side (the router's public IP). Ssh'ing to the server requires a LAN IP, but that's no big deal; I'm at home whenever I'm working on it anyway.
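
    If anyone wants to replicate that baseline, it's roughly this (from memory, so treat it as a sketch and adjust for your distro):

    ```
    # default-deny inbound, then open only what's needed
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 22/tcp
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    # key-only ssh: in /etc/ssh/sshd_config set
    #   PasswordAuthentication no
    #   PermitRootLogin prohibit-password
    sudo systemctl restart ssh
    ```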
    • dizhn 4 hours ago
      Put wireguard on that thing and don't expose anything on your public IP. Better yet don't have a public IP. Just forward the WireGuard UDP port from your router. That's it. No firewall, no nothing. Not even accidental exposure.
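
      The whole server side is one small file (sketch; keys, subnet, and port are placeholders), plus a single UDP port-forward rule on the router:

      ```
      # /etc/wireguard/wg0.conf
      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <server-private-key>

      [Peer]
      # your laptop / phone
      PublicKey = <client-public-key>
      AllowedIPs = 10.8.0.2/32
      ```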
      • drnick1 3 hours ago
        > Put wireguard on that thing and don't expose anything on your public IP. Better yet don't have a public IP.

        This is nonsense. You can't self-host services meant to interact with the public (such as email, websites, Matrix servers, etc.) without a public IP, preferably one that is fixed.

        • tstrimple 2 hours ago
          Sure you can. It’s what cloudflared and services like it are designed for.
          • drnick1 2 hours ago
            Is it still self-hosting though?
            • hooo 1 hour ago
              You just need to keep the DNS record updated.
  • chaz6 5 hours ago
    I would really like some kind of agnostic backup protocol, so I can simply configure my backup endpoint using an environment variable (e.g. `-e BACKUP_ENDPOINT=https://backup.example.com/backup -e BACKUP_IDENTIFIER=xxxxx`), then the application can push a backup on a regular schedule. If I need to restore a backup, I log onto the backup app, select a backup file and generate a one time code which I can enter into the application to retrieve the data. To set up a new application for backups, you would enter a friendly name into the backup application and it would generate a key for use in the application.
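
    The application side of such a protocol could be as dumb as this (entirely hypothetical, just illustrating the idea with the variables above):

    ```
    tar -C /var/lib/myapp -czf - . |
      curl --fail -X POST "$BACKUP_ENDPOINT" \
           -H "Authorization: Bearer $BACKUP_IDENTIFIER" \
           --data-binary @-
    ```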
    • PaulKeeble 5 hours ago
      At the moment I docker compose down everything, run the backup of their files, and then docker compose up -d again afterwards. This sort of downtime in the middle of the night isn't an issue for home services, but it's also not an ideal system, even if most services won't be mid-write at backup time precisely because it's the middle of the night! But if I don't do it, the one time I need those files I can guarantee they'll be corrupted, so at the moment I don't feel like there are a lot of other options.
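
      Concretely, the nightly cron job is just something like this (sketch; paths are placeholders):

      ```
      #!/bin/sh
      # stop the stack, snapshot its data, start it again
      cd /opt/stacks/myapp || exit 1
      docker compose down
      tar -czf "/backups/myapp-$(date +%F).tar.gz" ./data
      docker compose up -d
      ```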
    • Waterluvian 5 hours ago
      Maybe apps could offer backup to stdout and then you pipe it. That way each app doesn’t have to reason about how to interact with your target, doesn’t need to be trusted with credentials, and we don’t need a new standard.
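
      Something like this, where `myapp backup` is the hypothetical app side and everything after the first pipe is the user's business:

      ```
      myapp backup | gpg --encrypt -r backups@example.com |
        ssh backuphost "cat > backups/myapp-$(date +%F).gpg"
      ```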
    • dangus 5 hours ago
      I use Pika Backup which runs on the BorgBackup protocol for backing up my system’s home directory. I’m not really sure if this is exactly what you’re talking about, though. It just sends backups to network shares.
      • cryostasis 2 hours ago
        I'm actively in the process of setting this up for my devices. What have you done for off-site backups? I know there are Borg specific cloud providers (rsync.net, borgbase, etc.). Or have you done something like rclone to an S3 provider?
        • dangus 2 hours ago
          No off-site backup for me, these items aren’t important enough, it’s more for “oops I broke my computer” or “set my new computer up faster” convenience.

          Anything I really don’t want to lose is in a paid cloud service with a local backup sync over SMB to my TrueNAS box for some of the most important ones.

          An exception is GitHub, I’m not paying for GitHub, but git kinda sorta backs itself up well enough for my purposes just by pulling/pushing code. If I get banned from GitHub or something I have all the local repos.

          • cryostasis 1 hour ago
            Good to know! I have shifted more to self-hosting, e.g., Gitea rather than GitHub, and need to establish proper redundancy. Hopefully Borg Backup, with its deduplication, will be good, at least for on-site backups.
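
            If it helps, the Borg loop I'm converging on is tiny (sketch; the repo URL and paths are placeholders):

            ```
            # one-time: create an encrypted repo
            borg init --encryption=repokey ssh://user@backuphost/./repo

            # each run: deduplicated snapshot, then thin out old archives
            borg create --stats ssh://user@backuphost/./repo::'{hostname}-{now}' /home/me
            borg prune --keep-daily=7 --keep-weekly=4 ssh://user@backuphost/./repo
            ```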
  • shamiln 5 hours ago
    Tailscale was never the unlock for me, but I guess I never was the typical use case here.

    I have a 1U server (or more) sitting in a rack in a local datacenter. I have an IP block to myself.

    Those servers are publicly exposed, with only a few ports open: mail, HTTP traffic, and SSH (for Git).

    I guess my use case also differs in that I don’t host things just for me to consume; select others can consume the services I host.

    My definition of self-hosting isn’t that I and I only can access my services; that’d be me having a server at home with some non-critical things on it.

    • zrail 4 hours ago
      Curious how long you've been sitting on the IP block. I've been nosing around getting an ASN to mess around with the lower level internet bones but a /24 is just way too expensive these days. Even justifying an ASN is hard, since the minimum cost is $275/year through ARIN.
      • bakies 4 hours ago
        Is that the minimum for an ASN? A /24 is a lot of public IP space! I'd expect to just get a static IP from an ISP if I were to colo like this.
        • zrail 3 hours ago
          The minimum publicly routable IPv4 subnet is /24 and IPv6 is /48. IPv6 is effectively free, there are places that will lease a /48 for $8/year, whereas as far as I can tell it's multiple thousands of USD per year to acquire or lease a /24 of IPv4.
  • HarHarVeryFunny 3 hours ago
    Interesting use case for Claude Code, or any similar local executor talking to a remote AI (Gemini suggests that "Hybrid-Local AI Agent" is a generic name for these, although I've never heard it called that before).

    I wonder if a local model might be enough for sysadmin skills, especially if it were trained specifically for this?

    I wonder if iOS has enough hooks available that one could make a very small/simple agentic Siri replacement like this that was able to manage the iPhone at least better than Siri (start and stop apps, control them, install them, configure the iPhone, etc.)?

  • jackschultz 6 hours ago
    I literally did this yesterday and had the same thought. Older computer (8 gigs of RAM) with a crappy Windows install I never used, and I thought: huh, I wonder how well these models can take me through installing Linux, with the goal of Docker deploys of relatively basic things like cron tasks, a personal Postgres, and MinIO that I can use for self-shared data.

    Took a couple hours, with some things I ran across along the way, but the model had me go through the setup for Debian, how to go through the setup GUI, what to check to make it server-only. Then it took me through commands to run so it wouldn't stop when I closed the laptop, helped with Tailscale, and got the ssh keys all set up. Heck, it even suggested doing daily dumps of the database, saving them to MinIO, and removing them after that. It also knows about the limitations of 8 gigs of RAM and how to make sure Docker settings for the different self-hosted services I want to build don't cause issues.
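
    (For reference, the laptop-lid part is usually just logind config on systemd distros, e.g.:

    ```
    # /etc/systemd/logind.conf
    HandleLidSwitch=ignore
    HandleLidSwitchExternalPower=ignore
    ```

    followed by a restart of systemd-logind.)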

    Give me a month, truly strong intentions, and the ability to google, read posts, and find the answers on my own, and I still don't think I would have gotten to this point, with the amount of trust I have in the setup.

    I very much agree with the point about self-hosting coming alive because these models can walk you through everything. Self-building and self-hosting can really come alive. And in the future, when open models are that much better and hardware costs come down (maybe, just guessing of course), we'll also be able to host our own agents on these machines we have already set up. All of us able to do it ourselves.

  • danpalmer 5 hours ago
    There's something ironic about using Claude Code – a closed source service, that you can't self-host the hardware for, and that you can't get access to the data for – to self-host so that you can reduce your dependencies on things.
    • SchemaLoad 4 hours ago
      Before you had to rely on blog posts and reddit for information, something you also couldn't self host. And if you are just asking it questions and taking actions yourself, you are learning how it works to do it yourself next time.
      • danpalmer 4 hours ago
        Or you could read man pages, ask people for help, read books... all of which are more closely aligned with self-hosting than outsourcing the whole process.

        I agree you could use LLMs to learn how it works, but given that they explain and do the actions, I suspect the vast majority aren't learning anything. I've helped students who are learning to code, and very often they just copy/paste back and forth and ignore the actual content.

        • SchemaLoad 4 hours ago
          Sure, you could. But this isn't my job, it isn't my career. I just want Nextcloud running on a machine at home. I know linux and docker well enough to validate the ideas coming out of Gemini, and it helps me find stuff much faster than if I had to read man pages or read books.

          And I find the stuff that the average self hoster needs is so surface level that LLMs flawlessly provide solutions.

          • danpalmer 4 hours ago
            My push back isn't really on the possibility, it's on the irony. Self hosting is for many an ideological act that's about reducing dependencies on big tech, removing surveillance, etc. LLMs are essentially the antithesis of this.

            If you're self hosting for other reasons then that's fine. I self host media for various reasons, but I also give all my email/calendar/docs/photos over to a big tech company because I'm not motivated by that aspect.

            • SchemaLoad 3 hours ago
              Kind of but I don't really agree. Before LLMs you were still reliant on online resources, forums, digitalocean blog posts. The server itself also doesn't rely on an LLM. If one goes down, your server will continue functioning. You are also not tied to any particular LLM and can freely switch.

              They also aren't seeing any of your sensitive data being hosted on the server. At least the way I use them is getting suggestions for what software and configs I should go with, and then I do the actual doing. Which means I'm becoming independently more capable than I was before.

    • itchingsphynx 57 minutes ago
      Ahh yes, the irony is not lost on me: using a paid closed-source service to create and help manage a self-hosted service running FOSS. I thought the point was that I didn't want to pay SaaS subscription costs, but now I just need Claude Pro...

      I'm asking Claude technical questions about setup, e.g., about a manual that I have skimmed but don't necessarily fully understand yet. How do I monitor this service? Oh, connect Tailscale and manage with ACLs. But what do I do when it doesn't work or goes down? Ask Claude.

      To get more accurate setup and diagnostics, I need to share config files, firewall rules, IPv6 GUAs, Tailscale ACLs... and Claude just eats it up, and now Anthropic knows it forever too. Sure, CGNAT, WireGuard, and ssh logins stand between us, but... Claude is running in a terminal window on a LAN device next to another terminal window that does have access to my server. Do I trust VS Code? Anthropic? The FOSS? Is this really self-hosting? Ahh, but I am learning stuff, right?

    • raincole 3 hours ago
      If I google how to host a Wordpress blog are you going to tell me what I am doing is "ironic" because Google is not hosted by me? Even more ironic, Google has a competing product, blogspot! How ironic!
  • cmiles8 6 hours ago
    Anyone serious about tech should have a homelab. It’s a small capital investment that lasts for years, and with Proxmox or similar, having your own personal “private cloud” on demand is simple.
  • atmosx 6 hours ago
    Just make sure you have a local and a remote backup server.

    From time to time, test the restore process.

    • yencabulator 43 minutes ago
      Claude with root access will ensure there's "motivation" to run the restore process regularly.
  • benzguo 5 hours ago
    Great post! Totally agree – agents like Claude Code make self-hosting a lot more realistic and low maintenance for the average dev.

    We've gone a step further, and made this even easier with https://zo.computer

    You get a server, and a lot of useful built-in functionality (like the ability to text with your server)

  • recvonline 4 hours ago
    I started the same project at the end of last year and it’s true - having an LLM guide you through the setup and write docs is a real game changer!

    I just wish this post wasn’t written by an LLM! I miss the days where you can feel the nerdy joy through words across the internet.

  • JodieBenitez 4 hours ago
    So it's self-hosting but with a paid and closed SaaS dependency? I'll pass.
    • HarHarVeryFunny 2 hours ago
      Doesn't have to be that way though. As discussed here recently, a basic local agent like Claude Code is only a couple hundred lines of code, and could easily be written by something like Claude Code if you didn't want to do it yourself.

      If you have your own agent, then it can talk to whatever you want - could be OpenRouter configured to some free model, or could be to a local model too. If the local model wasn't knowledgeable enough for sysadmin you could perhaps use installable skills (scripts/programs) for sysadmin tasks, with those having been written by a more powerful model/agent.

  • nojs 4 hours ago
    This post is spot on, the combo of tailscale + Claude Code is a game changer. This is particularly true for companies as well.

    CC lets you hack together internal tools quickly, and tailscale means you can safely deploy them without worrying about hardening the app and server from the outside world. And tailscale ACLs lets you fully control who can access what services.

    It also means you can literally host the tools on a server in your office, if you really want to.

    Putting CC on the server makes this set up even better. It’s extremely good at system admin.

  • StrLght 5 hours ago
    > Your home server's new sysadmin: Claude Code

    (In)famous last words?

  • elemdos 5 hours ago
    I’ve also found AI to be super helpful for self-hosting but in a different way. I set up a Pocketbase instance with a Lovable-like app on top (repo here: https://github.com/tinykit-studio/tinykit) so I can just pull out my phone, vibecode something, and then instantly host it on the one server with a bunch of other apps. I’ve built a bunch of stuff for myself (journal, CRM, guitar tuner) but my favorite thing has been a period tracker for a close friend who didn’t want that data tracked + sold.
  • jawns 2 hours ago
    Remember: In all likelihood, your residential ISP does not permit you to operate a server.

    Granted, that's rarely enforced, but if you're a stickler for that sort of thing, check your ISP's Acceptable Use Policy.

  • bicepjai 5 hours ago
    I feel the same way. I now have around 7 projects hosted on a home server with Coolify + Cloudflare. I always worry about security, and I have seen many posts related to self-hosting trending on HN recently.
    • SchemaLoad 4 hours ago
      For security just don't expose the server to the internet. Either set up wireguard or tailscale. You can set it up in a split tunnel config so your phone only uses the VPN for LAN requests.
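
      With WireGuard, the split tunnel is just the client's AllowedIPs (sketch; key, hostname, and subnets are placeholders):

      ```
      # client-side config: only home subnets go through the tunnel
      [Peer]
      PublicKey = <server-public-key>
      Endpoint = home.example.com:51820
      AllowedIPs = 192.168.1.0/24, 10.8.0.0/24
      ```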
      • bicepjai 2 hours ago
        I am expecting Cloudflare Tunnel to take care of security. In fact, that is the only reason I am okay hosting from home. Are you talking about something more on top of Cloudflare Tunnel or extra security features or a replacement?
        • SchemaLoad 2 hours ago
          Cloudflare Tunnel is a very similar solution. Just a different product for the same task.
  • didntknowyou 4 hours ago
    idk exposing your home network to the world and trusting AI will produce secure code is not a risk I want to take
  • wantlotsofcurry 3 hours ago
    Was this article written entirely by Claude for the most part? It definitely reads like it was.
  • 1shooner 5 hours ago
    Others here mention Coolify for a homeserver. If you're looking for turnkey docker-compose based apps rather than just framework/runtime environments, I will recommend the runtipi project. I have found it to be simple and flexible. It offers an 'app store' like interface, and supports hosting your own app store. It manages certs and reverse proxy via traefik as well.

    https://runtipi.io/

  • amelius 5 hours ago
    > The reason is simple: CLI agents like Claude Code make self-hosting on a cheapo home server dramatically easier and actually fun.

    But I want to host an LLM.

  • easterncalculus 5 hours ago
    Nice. This is a great start. The next steps are backups and regular security updates. The former is probably pretty easy with Claude and a provider like Backblaze, for updates I wonder if "check for security issues with my software and update anything in need" will work well (and most importantly, how consistently). Alternatively, getting the AI to threat model and perform any docker hardening measures.

    Then someday we self-host the AI itself, and it all comes together.

    • zrail 4 hours ago
      My security update system is straightforward but it took quite a lot of thought to get here.

      My self hosted things all run as docker containers inside Alpine VMs running on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server along with a runner in a separate VM. I have a single command that will deploy everything along with a Forgejo action that invokes that command on a push to main.

      I then have Renovate running periodically set to auto-merge patch-level updates and tag updates.

      Thus, Renovate keeps me up to date and git keeps everyone honest.
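
      The Renovate side is a small config, something in this spirit (a sketch; exact rules are a matter of taste):

      ```
      {
        "extends": ["config:recommended"],
        "packageRules": [
          { "matchUpdateTypes": ["patch", "digest"], "automerge": true }
        ]
      }
      ```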

  • walterraj 2 hours ago
    I have a hard time reading things like “The last one is the real unlock.” or “That alone justified the box.” without immediately thinking of an AI trying to explain something. Not to say this was written with one, but the frequency with which I see phrasing like this nowadays is skyrocketing...
  • sprainedankles 6 hours ago
    Impeccable timing, I finally got around to putting some old hardware to use and getting a home assistant instance (and jellyfin, and immich, and nextcloud, ...) set up over winter break. Claude (and tailscale) saved hours of my time and enabled me to build enough momentum to get things configured. It's now feasible for me to spend 15-20 minutes knocking down homeserver tasks that I otherwise would've ignored. Quite fun!
  • jaime-ez 3 hours ago
    Has anyone got experience using Cloudflare tunnels on a small-scale (5000 users/day) self-hosted web service? I just got 2 Dynabook XJ-40s (32 GB RAM, 512 GB SSD) for 200 USD each, and I'm going to replace my DO droplets (150+ USD per month) with them. I plan to use a Cloudflare tunnel to make the service available to the internet without exposing my home network. Any downsides? (Besides Cloudflare being a MITM for the service, but it is not a privacy-focused business.)
  • hinkley 6 hours ago
    What I’d really like is to run the admin interface for an app on a self hosted system behind firewalls, and push read replicas out into the cloud. But I haven’t seen a database where the master pushes data to the replicas instead of the replicas contacting the master. Which creates some pretty substantial tunneling problems that I don’t really want on my home network.

    Is there a replica implementation that works in the direction I want?

    • chasing0entropy 4 hours ago
      Use NAT hole punching if you're advanced, or you could fall back to IP/port filtering
    • bakies 4 hours ago
      Tailscale will take care of the networking if you install it in both locations.
  • sciences44 6 hours ago
    Interesting subject, thank you! I have a cluster of 2 Orange Pis (16 GB RAM each) plus a Raspberry Pi. I think it's high time to get them back on my desk; I never got very far with the setup because writing the Ansible scripts/playbooks took so long and I lacked the time. With Claude Code, it's worth a try now. So thanks for the article; it makes me want to dust it off!
  • austin-cheney 5 hours ago
    I have found that storage is up in price more than 60% from last year.

    I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio

  • Gualdrapo 6 hours ago
    One day when I have some extra bucks I'd try to get a home server running, but the idea of having something eating grid electricity 24/7 doesn't seem to play along well with this 3rd world budget. Are there some foolproof and not so costly off-grid/solar setups to look at (like a Raspberry-based thingy or similar)?
    • imiric 4 hours ago
      Your fridge and other home appliances likely use much more power than whatever a small server would. The mini PC in the article is very power efficient. You likely won't notice it in your power bill, regardless of your budget. You could go with a solar-powered setup if you prefer, but IMO for this type of use case it would be overengineering.
    • noname120 4 hours ago
      Mac Mini (M1 and later) under Asahi Linux just uses 5 W for a normal workload. If you push it to 100% of CPU it reaches 20 W. That’s very little.
      • SchemaLoad 4 hours ago
        Only thing is you can't run Proxmox (which makes self-hosting much better), and you'll be limited to ARM builds, though on a server that's a lot easier than trying to run desktop apps. Modern micro desktops are also fairly power efficient: perhaps not quite as low as the Mac, but much lower than a regular gaming desktop idling.

        Avoid stacking in too many hard drives since each one uses almost as much power as the desktop does at idle.

      • atahanacar 4 hours ago
        I doubt anyone so tight on cash that they have to think about the electricity cost of a home server can afford a Mac.
  • notesinthefield 6 hours ago
    I find myself a bit overwhelmed with hardware options during recent explorations. Seemingly everything can handle what I want: a local copy of my Bandcamp archive to stream via Jellyfin. Good times we’re in, but even with good sysadmin skills, I wish someone would just tell me exactly what to buy.
    • devonhk 5 hours ago
      > I wish someone would just tell me exactly what to buy.

      I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or from local used computer stores. Lenovo ThinkCentres (e.g., m700 tiny) are also a great option if you prefer something with a smaller form factor.

      I’d recommend disregarding advice from non-technical folks recommending brand new, expensive hardware, because it’s usually overkill.

      • SchemaLoad 4 hours ago
        I spent so long trying to make Raspberry Pis work but they just kind of suck and everything is harder on them. I only just discovered that there are an infinite supply of these micro desktops second hand from offices/government. I was able to pick up a 9th gen intel with 16gb ram for less than the cost of a Pi 5, and it's massively more powerful.
        • devonhk 4 hours ago
          Yeah, they’re amazing value. I paid $125 CAD for a 4th gen i7 with 16GB of RAM about 5 years ago. It’s been running almost 24/7 ever since with no issues.
          • SchemaLoad 3 hours ago
            You also don't have to deal with the usual annoyances of second-hand gear, like Facebook Marketplace and no delivery. These companies / governments have contracts with reseller companies who buy the entire stock and sell it online, just like buying new.
        • jacobthesnakob 4 hours ago
          Pis are incredible little basic home servers, but they can’t handle transcoding. A great option for places with very expensive electricity, too.
          • SchemaLoad 4 hours ago
            I just found their proprietary hardware and ARM base too limiting. I wanted full disk encryption underneath a Nextcloud setup, and found that on the Pi this is an incredibly complex process, while on an x86 PC it's just a checkbox on install.

            And then you can only use distros which have a Raspberry Pi specific build. Generic ARM ones won't work.

            • jacobthesnakob 4 hours ago
              Yeah, the complaints are fair. I stick to RPi OS for maximum compatibility. People have been crying for a Google Drive client for Linux for over a decade, but you still have to set it up in rclone.

              I build out my server in Docker and I’ve been surprised that every image I’ve ever wanted to download has an ARM image.

          • drnick1 3 hours ago
            Way too expensive for their moderate performance. All serious self-hosters (not Youtube home-labbers) use x86 machines, often retired desktop/gaming rigs or used datacenter hardware.
            • jacobthesnakob 2 hours ago
              What is a “serious self hoster”? How many Docker containers do I need to be running on my Pi 5 to get into the club?
      • notesinthefield 4 hours ago
        I forgot all about these after I stopped doing desktop support, thanks!
    • rr808 1 hour ago
      Get started with a corporate-surplus mini PC on eBay. They're super cheap: search for "micro PC". If you get a recent CPU from Dell or Lenovo, it should be under $200, and you can install Fedora or another Linux distribution. Ask Claude for everything else.
  • CuriouslyC 4 hours ago
    Tailscale is pretty sweet. Cloudflare WARP is also pretty sweet, a little clunkier but you get argo routing for free and I trust Cloudflare for security.
  • nick2k3 4 hours ago
    All fine and great with Tailscale until your company places an iOS restriction on external VPNs and your work phone is also your primary phone :(
    • ivanjermakov 4 hours ago
      Usually you can ask for a separate phone for work. I can't stand when personal devices are poisoned with Intune and other company crap.
    • jacobthesnakob 4 hours ago
      My work WiFi blocked traffic to port 51820, the default WireGuard port. I was wondering why my VPN started failing to handshake one day. I changed my ports to 51821 that night and back in business. I checked our technology policy and there’s no “thou shalt not use a VPN” clause so no clue why someone one day decided to drop WireGuard traffic on the network.
  • cafebeen 6 hours ago
    This is great and echoes my experience, although I would add a caveat that this mostly applies to solo work. Once you need to collaborate or operate on a team, many of the limits of self-hosting return.
  • Sirikon 3 hours ago
    Self-hosting post. Tailscale.

    It's comedic at this point.

  • reachableceo 6 hours ago
    Cloudron makes this even easier. Well worth $1.00 a day! It handles the entire stack (backups, monitoring, DNS, SSL, updates).
  • Dbtabachnik 3 hours ago
    How is readcheck any different than using raindrop.io?
  • zebnyc 5 hours ago
    Basic question: If I wanted a simple self hosting solution for a bot with a database, what is the simplest solution / provider I can go with. This bot is just for me doesn't need to be accessible to the general public.

    Thanks

  • krupan 2 hours ago
    Oh my gosh, everything you want to host comes with a docker compose file that requires you to tweak maybe two settings. Caddy as your web proxy has the absolute simplest setup possible. You don't need AI to help you with this. You've got this. You just want to make sure you understand the basics so you (or your LLM) don't do anything brain-dead stupid. It's not that hard, you can do it!
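
    For scale, this is an entire working Caddyfile for one service (domain and port are placeholders), and Caddy fetches and renews the TLS certificate on its own:

    ```
    photos.example.com {
        reverse_proxy 127.0.0.1:2283
    }
    ```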
  • e2e4 6 hours ago
    My stack: Claude Code working via CLIs; Coolify on Hetzner.
  • journal 3 hours ago
    None of you have what it takes to self-host your perfect self-hosting fantasy, because most of you won't cooperate with others. Keep waiting for that unicorn; you wouldn't see it standing right in front of you.
  • drnick1 1 hour ago
    Reminder: If you are using Tailscale or a VPS you aren't really self-hosting.
  • megous 2 hours ago
    My idea of fun is deeply tied to understanding how things work—learning them, then applying that knowledge in my own way, as simply as possible. That process gives me a sense of ownership and control, which is not something I get from an approach where AI does things for me that I do not understand.
  • tamimio 3 hours ago
    Nope, never trust AI to do such things; it’s bound to cause issues. Maybe as an assistant only, but never installed on the same server and, worse, given the privilege to access and execute commands.
  • RicoElectrico 4 hours ago
    I just use Proxmox on Optiplex 3060 micro. On it, a Wireguard tunnel for remote admin. The ease of creating and tearing down dedicated containers makes it easy to experiment.
    • esbeeb 2 hours ago
      I too have that same Dell Optiplex 3060 micro. I love it for experimenting also. Also use wireguard for remote access. I use incus for my Linux containers, preferring it to proxmox.
  • syndacks 4 hours ago
    Can the same thing be said for using docker compose etc. on a VPS to host a web app? I.e., can you get the ergonomics / ease of using Fly or Render?

    Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points:

    1. Fear of misconfiguring Linux
    2. Fear of Docker / Compose complexity
    3. Fear of “what if it breaks at 2am?”

    CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).

    So the tradeoff has changed from:

    “Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”

  • cryptica 5 hours ago
    I started self-hosting after noticing that my AWS bill increased from like $300 per month to $600 per month within a couple of years. When looking at my bill, 3/4 of the cost was 'AWS Other'; mostly bandwidth. I couldn't understand why I was paying so much for bandwidth given that all my database instances ran on the same host as the app servers and I didn't have any regular communication between instances.

    I suspect it may have been related to the Network File System (NFS)? Like whenever I read a file on the host machine, it goes across the data-center network and charges me? Is this correct?

    Anyway, I just decided to take control of those costs. Took me 2 weeks of part-time work to migrate all my stuff to a self-hosted machine. I put everything behind Cloudflare with a load balancer. Was a bit tricky to configure as I'm hosting multiple domains from the same machine. It's a small form factor PC tower with 20 CPU cores; easily runs all my stuff though. In 2 months, I already recouped the full cost of the machine through savings in my AWS bill. Now I pay like $10 a month to Cloudflare and even that's basically an optional cost. I strongly recommend.

    Anyway it's impressive how AWS costs had been creeping slowly and imperceptibly over time. With my own machine, I now have way more compute than I need. I did a calculation and figured out that to get the same CPU capacity (no throttling, no bandwidth limitations) on AWS, I would have to pay like $1400 per month... But amortized over 4 years my machine's cost is like $20 per month plus $5 per month to get a static IP address. I didn't need to change my internet plan other than that. So AWS EC2 represented a 56x cost factor. It's mind-boggling.

    I think it's one of those costs that I kind of brushed under the carpet as "it's an investment." But eventually, this cost became a topic of conversation with my wife, and she started making jokes about our contribution to Jeff Bezos' wife's diamond ring. Then it came to our attention that his megayacht is so large that it comes with a second yacht beside it. Then I understood where he got it all from. Though to be fair to him, he is a truly great businessman; he didn't get it from institutional money or complex hidden political schemes; he got it fair and square through a very clever business plan.

    Over 5 years or so that I've been using AWS, the costs had been flat. Meanwhile the costs of the underlying hardware had dropped to like 1/56th... and I didn't even notice. Is anything more profitable than apathy and neglect?

    • jdsully 4 hours ago
      The most likely culprit was talking to other nodes via their public IPs instead of their local ones. That gets billed as internet traffic (the most expensive). The second culprit is your database or other nodes being in different AZs, which incurs a cross-zone bandwidth charge.

      Bandwidth inside the same zone is free.

  • holyknight 6 hours ago
    not with these hardware prices...
    • SchemaLoad 4 hours ago
      Second hand micro desktops are still cheap, at least for now.
    • drnick1 3 hours ago
      Hardware that is considered e-waste (like a Core 2 Duo) makes a wonderful home server.
      • SchemaLoad 3 hours ago
        You can go much newer than that and get semi modern intel chips second hand. For something that runs 24/7, the power cost will exceed the savings from using long obsolete chips.
  • efilife 6 hours ago
    how many times will I get clickbaited by some cool title only to see AI praise in the article and nothing more? It's tiring and happens way too often

    related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/

    Also the "Why it matters" in the article. I thought it's a jab at AI-generated articles but it starts too look like the article was AI written as well

    • jacobthesnakob 4 hours ago
      Maybe because I don’t do SWE for my job, but I have fun writing docker-compose files, troubleshooting them, and adding containers to my server. Then I understand how/why stuff works if it breaks, why would I want to hand that over to an AI?

      Waiting for the follow-on article “Claude Code reformatted my NAS and I lost my entire media collection.”

      • chasing0entropy 4 hours ago
        ROFL. There have been at least two posts of Claude deleting a repository without confirmation, and one where it wiped an entire partition.
    • keybored 6 hours ago
      Everything is now not-niche but on the cusp of hitting the mainstream. Like Formal Methods.[1] But they were nice enough to put it in the title. Then tptacek replied that he “called it a little bit” because of: Did Semgrep Just Get A Lot More Interesting?[2] (Why? What could the reason be?)

      [1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...

      [2] https://fly.io/blog/semgrep-but-for-real-now/

  • fassssst 4 hours ago
    Umm, what happened to zero trust? Network security is not sufficient.
  • tkgally 2 hours ago
    I used Claude Code just yesterday in a similar way: to solve a computer problem that I previously would have tried googling.

    I had a 30-year-old file on my Mac that I wanted to read the content of. I had created it in some kind of word processing software, but I couldn’t remember which (Nexus? Word? MacWrite? ClarisWorks? EGWORD?) and the file didn’t have an extension. I couldn’t read its content in any of the applications I have on my Mac now.

    So I pointed CC at it and asked what it could tell me about the file. It looked inside the file data, identified the file type and the multiple character encodings in it, and went through a couple of conversion steps before outputting as clean plain text what I had written in 1996.

    Maybe I could have found a utility on the web to do the same thing, but CC felt much quicker and easier.