Curious to hear about the experiences of those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

  • jet@hackertalks.com · 3 days ago

    KISS

    The more complicated the machine, the more chances for failure.

    Remote management plus bare metal just works: it’s very simple, and you get the maximum out of the hardware.

    Depending on your use case, that can be very important.

  • ZiemekZ@lemmy.world · 7 days ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, the same way I can run sudo apt install qbittorrent-nox today? I don’t think anything prohibits them from running on the same bare metal; in fact, I think they’d run just as well as they do in Docker (if not better, given the lack of overhead)!

    • boonhet@sopuli.xyz · 7 days ago

      Both your examples actually include their own bloat to accomplish the same thing Docker would: they both bundle the libraries they depend on as part of the build.

        • boonhet@sopuli.xyz · 7 days ago

          True, Docker does it one better, because any executables also get their own redundant copies: run two different Node applications on bare metal and they can still disagree about the Node version, etc.

          The actual old-school bloat-free way to do it is shared libraries of course. And that shit sucks.

      • communism@lemmy.ml · 7 days ago

        Idk about Immich, but Vaultwarden is just a Cargo project, no? Cargo statically links crates by default, though I think it can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general, just by convention.
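
        A quick way to see this in practice (just a sketch, assuming a Vaultwarden source checkout and a stable Rust toolchain): the release binary only links dynamically against system libraries, while the Rust crates themselves get compiled in.

          # Build the release binary; Rust crates are statically linked into it by default.
          cargo build --release

          # Inspect the dynamic dependencies: typically only system libraries (libc, libssl, ...)
          # show up, not the hundreds of crates listed in Cargo.lock.
          ldd target/release/vaultwarden

          # Dynamic linking of the Rust standard library is possible, but unusual:
          RUSTFLAGS="-C prefer-dynamic" cargo build --release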

        • boonhet@sopuli.xyz · 7 days ago

          Yes, that was my point: you (generally) link statically in Rust because that resolves dependency conflicts between the different applications you need to run. The cost is a slightly bigger, bloatier binary, but it’s generally a very good tradeoff, because a slightly bigger binary isn’t an inconvenience these days.

          Docker achieves the same for everything, including dynamically linked projects that default to shared libraries and can turn into dependency nightmares, other binaries that get called, etc. It doesn’t virtualize an entire OS unless you’re running it on macOS or Windows, so the performance overhead is not as big as people seem to think (the disk space overhead, though… can get slightly bigger). It’s also great for dev environments, because different devs can use whatever the fuck they prefer as their main OS and Docker will make everyone’s environment the same.

          I generally wouldn’t put a Rust/Cargo project in Docker by default, since it’s pretty rare to run into external dependency issues with those, but I might still do it for the tooling (docker compose, mainly).

  • erock@lemmy.ml · 5 days ago

    Here’s my homelab journey: https://bower.sh/homelab

    Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support being split across guests. At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch, running everything with systemd and Quadlet.
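
    For anyone curious what the Quadlet setup looks like, here’s a minimal sketch (the service name, image, and port are placeholders, not anything from my actual setup). Podman’s systemd generator turns *.container files into ordinary systemd services:

      # Drop a unit-like .container file where Quadlet looks for user services.
      mkdir -p ~/.config/containers/systemd
      cat > ~/.config/containers/systemd/myapp.container <<'EOF'
      [Unit]
      Description=Example containerized service managed by systemd

      [Container]
      Image=docker.io/library/nginx:alpine
      PublishPort=8080:80

      [Install]
      WantedBy=default.target
      EOF

      # Quadlet generates myapp.service from the file above.
      systemctl --user daemon-reload
      systemctl --user start myapp.service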

  • nuggie_ss@lemmings.world · 7 days ago

    Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.

  • Magiilaro@feddit.org · 7 days ago

    My servers and NAS were set up long before Docker was a thing, and since I am running them on a rolling-release distribution, there was never a reason to change anything. It works perfectly fine the way it is, and it will most likely keep running perfectly fine for the next 10+ years too.

    Well, I am planning to replace my aging HPE ProLiant MicroServer Gen8 that I use as a home server/NAS, once I find the time to research a good successor. Maybe I will then set everything up cleanly and migrate the services to Docker/Podman/whatever is fancy by then. But most likely I will just transfer all the disks and keep the old system running on newer hardware. Life is short…

  • OnfireNFS@lemmy.world · 6 days ago

    This reminds me of a question I saw a couple years ago. It was basically why would you stick with bare metal over running Proxmox with a single VM.

    It kinda stuck with me, and since then I’ve reimaged some of my bare metal servers with exactly that. It just makes backups, restores, and snapshots so much easier. It’s also really convenient to have a web interface for managing the machine.

    Probably doesn’t work for everyone, but it works for me.

  • kossa@feddit.org · 7 days ago

    Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

    My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

  • iegod@lemmy.zip · 7 days ago

    You sure you mean bare metal here? Bare metal means no OS.

  • atzanteol@sh.itjust.works · 9 days ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit their access to resources.

    Yes, I’ll die on this hill.
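
    This is easy to verify on any Docker host; a quick sketch, assuming a running container that happens to be named web (the name is just an example):

      # The container's main process ID, as seen by the host kernel.
      PID=$(docker inspect --format '{{.State.Pid}}' web)

      # It shows up in the ordinary host process list...
      ps -fp "$PID"

      # ...and its cgroup membership is right there in /proc.
      cat /proc/"$PID"/cgroup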

    • sylver_dragon@lemmy.world · 8 days ago

      But, but, Docker, Kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. It’s different in a good way, though; I don’t miss the bad old days of everything living on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • sugar_in_your_tea@sh.itjust.works · 8 days ago

        kubernetes

        Kubernetes isn’t just resource isolation; it encourages splitting services across the hardware in a cluster. So you’ll get more latency than with VMs on a single box, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.

        • mesa@piefed.social · 8 days ago

          I love using it at work. It’s a great tool to get everything up and running, kinda like Ansible. Paired with containerization, it can make applications more “standard” and easy to spin back up.

          That being said, for a home server it feels like overkill. I don’t need my resources spread out that far, and I don’t want to keep updating my Kubernetes and container setup with each new iteration. It’s just not fun (to me).

      • AtariDump@lemmy.world · 7 days ago

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

      • atzanteol@sh.itjust.works · 8 days ago

        Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

  • sepi@piefed.social · 8 days ago

    “What is stopping you from” <- this is a loaded question.

    We’ve been hosting stuff since long before Docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

    I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.

    tl;dr: Docker is not an absolute necessity, and your phrasing makes it seem like it’s the only way of self-hosting you are comfy with. People are, and have been, comfy with a ton of other things for a long time.

    • kiol@lemmy.world (OP) · 8 days ago

      The question is worded that way on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

  • nucleative@lemmy.world · 8 days ago

    I’ve been self-hosting since the ’90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the kernel from source on Slackware to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly, oftentimes with a single command line. Docker Compose allows for quickly nuking and rebuilding, and often your entire config fits in one or two files.

    And if you need to slap a Traefik, a Postgres, or some other service into your group of containers, it can be done in seconds, completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade the underlying hardware, it’s now a process that takes minutes. Absolutely beautiful.
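
    To give a concrete flavor of that (a sketch only: the service names, image tags, and password below are placeholders, not my actual stack), the whole thing lives in one compose file that can be copied to a new VPS and brought up with a single command:

      # Write an illustrative docker-compose.yml with a reverse proxy and a database.
      cat > docker-compose.yml <<'EOF'
      services:
        proxy:
          image: traefik:v3.1
          command: --providers.docker=true
          ports:
            - "80:80"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock:ro
        db:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD: changeme   # example value only
          volumes:
            - dbdata:/var/lib/postgresql/data
      volumes:
        dbdata:
      EOF

      # Bring the stack up; moving hosts is copying this file (plus the volumes) and rerunning it.
      docker compose up -d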

    • roofuskit@lemmy.world · 8 days ago

      Hey, you made my post for me, though I’ve been using Docker for a few years now. Never. Looking. Back.

  • fubarx@lemmy.world · 8 days ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs, right down to the bootloader.

    The only constant is change.

  • enumerator4829@sh.itjust.works · 9 days ago

    My NAS will stay on bare metal forever; any added complication there is something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.

    As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure. I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I will probably migrate to small per-service VMs once I get new hardware up and running.

    Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

    So, what keeps me on bare metal? Keeping my ZFS pools safe, and otherwise just keeping away from the OCI ecosystem in general; the grass is far greener inside the normal package repositories.

    • towerful@programming.dev · 8 days ago

      A NAS as bare metal makes sense.
      It can then correctly interact with the raw disks.

      You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
      Let a storage device be a storage device, and let a hypervisor be a hypervisor.

    • zod000@lemmy.dbzer0.com · 8 days ago

      I feel like this too. I do not feel comfortable using docker containers that I didn’t make myself. And for many people, that defeats the purpose.

  • zod000@lemmy.dbzer0.com · 8 days ago

    Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.

    • boonhet@sopuli.xyz · 7 days ago

      The main benefit of Docker at home is Docker Compose, IMO. It makes it so easy to reuse your configuration.

  • mesa@piefed.social · 8 days ago

    All my services run on bare metal because it’s easy, and the backups work. It heavily simplifies the work, and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container… contained and running. Plus a VERY tiny system can run:

    1. Peertube
    2. GoToSocial + client
    3. RSS
    4. search engine
    5. A number of custom sites
    6. backups
    7. Matrix server/client
    8. and a whole lot more

    Without a single Docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and it has been working great. I used to over-complicate everything with Docker + Docker Compose, but then I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.
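
    For what it’s worth, the dd approach really is about as simple as backups get. Roughly something like the following, run from a live/rescue system while the server’s disk isn’t mounted (the device names and paths are examples, not taken from the actual setup):

      # Image the whole system disk to a backup drive (example device names!).
      # Run from a live USB / rescue environment so /dev/sda isn't mounted and changing underneath.
      dd if=/dev/sda of=/mnt/backup/homeserver.img bs=4M status=progress conv=fsync

      # Restoring is the same command with if= and of= swapped.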

    I use Docker, Kubernetes, etc. etc. all at work, and it’s great when you have the resources + coworkers to keep things up to date. But I just want to relax when I get home, and it’s not the end of the world if any of them go down.

    • Auli@lemmy.ca · 8 days ago

      Oh, so the other 80% of your RAM can sit there and do nothing? My RAM is always at around 80% or so, as it’s caching stuff like it’s supposed to.
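
      (For anyone following along, free makes the distinction visible: memory used by the page cache shows up under buff/cache but still counts as available, so “mostly full” RAM isn’t the same as RAM you can’t use.)

        # On a busy box most RAM sits in buff/cache, yet the "available" column stays high,
        # because the kernel drops cache on demand when applications need memory.
        free -h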

          • mesa@piefed.social · 8 days ago

            Welp, OP did ask how we set it up, and for a family instance it’s good enough. The RAM was extra that came with the comp. I have other things to do than optimize my family home server. There’s no latency at all already.

            It spikes when peertube videos are uploaded and transcoded + matrix sometimes. Have a good night!

      • mesa@piefed.social · 8 days ago

        FreshRSS. Sips resources.

        The dd runs when I want it to; I have a script I tested a while back. The machine won’t be on, yeah. It’s just a small image with the software.

    • Miaou@jlai.lu · 8 days ago

      Assuming you run Synapse, which uses more than 1.5 GB of RAM just idling, your system has at the very least 16 GB of RAM… Hardly what I’d call “very tiny”.

      • mesa@piefed.social · 8 days ago

        …OK, so I’m lying about my system for… some reason?

        Synapse looks like it’s using 200 MB right now. It jumps to 1 GB when it’s being heavily used, but I only use it for PieFed and a couple of other local rooms. Honestly it’s not doing that much for us, so we were thinking of getting rid of it. It’s irritating to keep having to set up new devices, and no one is really using it.

        PeerTube is much bigger, running at around 500 MB just doing its thing.

        It’s a single-family instance.

        # ps -eo user,pid,ppid,cmd,pmem,rss --no-headers --sort=-rss | awk '{if ($2 ~ /^[0-9]+$/ && $6/1024 >= 1) {printf "PID: %s, PPID: %s, Memory consumed (RSS): %.2f MB, Command: ", $2, $3, $6/1024; for (i=4; i<=NF; i++) printf "%s ", $i; printf "\n"}}'  
        PID: 2231, PPID: 1, Memory consumed (RSS): 576.67 MB, Command: peertube 3.6 590508 
        PID: 2228, PPID: 1, Memory consumed (RSS): 378.87 MB, Command: /var/www/gotosocial/gotosoc 2.3 387964 
        PID: 2394, PPID: 1, Memory consumed (RSS): 189.16 MB, Command: /var/www/synapse/venv/bin/p 1.1 193704 
        PID: 678, PPID: 1, Memory consumed (RSS): 52.15 MB, Command: /var/www/synapse/livekit/li 0.3 53404 
        PID: 1917, PPID: 645, Memory consumed (RSS): 45.59 MB, Command: /var/www/fastapi/venv/bin/p 0.2 46680 
        
      • mesa@piefed.social · 8 days ago

        A couple of custom bash scripts handle the backups. I’ve used Ansible at work. It’s awesome, but my own stuff doesn’t require that kind of robustness.