• Rob Bos@lemmy.ca
      5 days ago

      At one point I rebuilt a server by fully abandoning the package database and reinstalling everything as overwrites. Converted a Slackware install into a Debian install in situ by cannibalizing it from the inside out. Pretty proud of that one, even 20 years later.

      edit: oh gods… more like 24.

    • utopiah@lemmy.world
      5 days ago

      upgrades when you’ve neglected a server

      In times of containers, does it even matter?

      Edit: to clarify, yes, you MUST keep your server up to date (and have backups), but what I’m questioning is whether the OS of a server actually matters much when most of the actual “serving” is done by containers, which might themselves get updates, or not, but are isolated.

      • cybersin@lemm.ee
        4 days ago

        Yes, it matters.

        Also, the actual isolation of container environments varies greatly on a per-container basis. Containers are far less isolated than virtual machines, and virtual machines are less isolated than separate hosts.
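        To make the “per-container basis” point concrete, here is a minimal sketch (the flags are real Docker options; the alpine image and echo commands are just placeholders) showing two runs of the same image with very different isolation levels:

        ```shell
        # Locked down: drop all capabilities, forbid privilege escalation,
        # read-only root filesystem, capped memory and process count.
        docker run --rm \
          --cap-drop=ALL \
          --security-opt no-new-privileges \
          --read-only \
          --memory 256m --pids-limit 100 \
          alpine:3 sh -c 'echo locked down'

        # Wide open: --privileged hands the container nearly full access
        # to the host, so "it runs in a container" says little by itself.
        docker run --rm --privileged alpine:3 sh -c 'echo privileged'
        ```

        Both are “containers,” but the second one barely isolates anything.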

        Neither containers nor VMs will protect you from attacks on the host itself; see regreSSHion. You may be able to limit access to your host by using containers or VMs, but container escapes and VM escapes are not impossible.

        Each of these layers takes time and effort to maintain. With “stable” distros like Debian, it is often the responsibility of the distribution to provide fixes for the packages it ships.

        Given Debian as the example, you are relying on the Debian package maintainer and Debian security team to address vulnerabilities by manually backporting security patches from the current software version to whatever ancient (stable) version of the package is in use, which takes considerable time and effort.
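        On the host side, taking advantage of that security team’s work is cheap. A sketch (assumes a Debian/Ubuntu host with apt; all commands are standard):

        ```shell
        # See what the distro's security team has queued up for you.
        sudo apt update
        apt list --upgradable 2>/dev/null | grep -i security

        # Or let the host apply security updates on its own.
        sudo apt install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades
        ```

        None of this touches what runs inside your containers, which is exactly why the container images need their own update story.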

        While Debian has a large community behind it, it may be unwise to use a “stable” distro that has few resources for maintaining its packages.

        OTOH, bleeding-edge distros like Arch get many of their patches directly from the original author as new version releases, placing a lower burden on package maintainers. However, rolling releases can be more vulnerable to supply-chain attacks like the XZ backdoor, due to their frequent updates.

        • utopiah@lemmy.world
          4 days ago

          Thanks for the in-depth clarification. I was thinking of how quickly a system can be reinstalled after a failure, but indeed security itself is fundamental.

          So, to try to better gauge the risk here, when you say

          container escapes and VM escapes are not impossible.

          what level of effort are you talking about? A state-level 0-day requiring a team of actual humans trying to hack you? A script kiddie downloading Kali and playing around for an hour? Something totally automated, perpetually scanning the Internet and owning you within minutes, without even caring who you are?

          I did read about blue pilling years ago (damn, just checked, nearly 20 years ago https://en.wikipedia.org/wiki/Blue_Pill_(software) ), but since escapes are the one thing solutions like Docker, Podman, etc. and proper VMs (and then the underlying hardware) have to worry about, it feels like going after them would be like trying to break in by picking the lock rather than breaking a window, namely attacking the “hard” part of the setup.