New design sets a high standard for post-quantum readiness.

    • lemmee_in@lemmy.world · ↑97 · 1 day ago

      Signal puts a lot of effort into a threat model that assumes a hostile host (i.e. AWS). That’s the whole point of end-to-end encryption: even if the host is compromised, the attackers do not get any information. They even go as far as padding out the lengths of encrypted messages, so everyone looks like they are sending identical blocks of data.
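      The length-padding idea can be sketched like this (a simplified illustration with a made-up 2048-byte bucket, not Signal’s actual padding scheme):

```python
BUCKET = 2048  # hypothetical bucket size; Signal's real scheme differs

def pad_to_bucket(plaintext: bytes) -> bytes:
    """Pad up to the next multiple of BUCKET before encrypting.

    A 0x80 delimiter marks the end of the real data, followed by
    zero bytes, so the padding can be stripped unambiguously.
    """
    total = -(-(len(plaintext) + 1) // BUCKET) * BUCKET  # ceiling
    return plaintext + b"\x80" + b"\x00" * (total - len(plaintext) - 1)

def unpad(padded: bytes) -> bytes:
    return padded[: padded.rindex(b"\x80")]

# Every message, short or long, leaves the device as a multiple of BUCKET,
# so ciphertext length reveals almost nothing about content length.
for msg in (b"hi", b"x" * 3000):
    blob = pad_to_bucket(msg)
    assert len(blob) % BUCKET == 0 and unpad(blob) == msg
```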

      • shortwavesurfer@lemmy.zip · ↑43 ↓3 · 1 day ago

        I’m assuming they were referring to the outage that occurred today, which took a ton of internet services, including Signal, offline temporarily.

        You can have all the encryption in the world, but if the centralized data point that allows you to access the service is down, then you’re fucked.

        • Pup Biru@aussie.zone · ↑22 ↓1 · 20 hours ago

          no matter where you host, outages are going to happen… AWS really doesn’t have many… it’s just that it’s so big that everyone notices - it causes internet-wide issues

            • sugar_in_your_tea@sh.itjust.works · ↑1 · 7 hours ago

              Monero isn’t like the other three, it’s P2P with no single points of failure.

              I haven’t looked too closely at Nostr, but I’m assuming it’s typically federated, with relays acting like Lemmy/Mastodon instances in terms of data storage (it’s a protocol, so I suppose posts could be local and switching relays is easy). If your instance goes down, you’re just as screwed as you would be with a centralized service, because Lemmy and Mastodon instances are essentially centralized services that share data. If your instance doesn’t go down but a major one does, your experience will be significantly degraded.

              The only way to really solve this problem is with P2P services, like Monero, or to have sufficient diversity in your infrastructure that a single major failure doesn’t kill the service. P2P is easy for something like a currency, but much more difficult for social media where you expect some amount of moderation, and redundancy is expensive and also complex.

              • shortwavesurfer@lemmy.zip · ↑2 · 6 hours ago

                Nostr is a weird beast. You are correct that it is not peer-to-peer like Monero. However, it’s not quite federated in the same way that ActivityPub services are.

                When using Nostr clients, you actually publish your data to something like six different relays at the same time. The protocol has the built-in assumption that some of those relays will be down at any given time, so by publishing to several at once you get data redundancy.
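                That publish-to-many redundancy can be sketched abstractly (the relay URLs and the `send` callback here are hypothetical stand-ins, not the real Nostr wire protocol):

```python
RELAYS = [f"wss://relay{i}.example" for i in range(6)]  # made-up relay URLs

def publish(event: dict, send, min_acks: int = 2) -> bool:
    """Send the same event to every relay; succeed if enough accept it.

    `send(relay, event)` stands in for a websocket round-trip that
    returns True on an OK ack; an exception or False counts as that
    relay being down.
    """
    acks = 0
    for relay in RELAYS:
        try:
            acks += bool(send(relay, event))
        except Exception:
            pass  # unreachable relay: exactly what the redundancy absorbs
    return acks >= min_acks

# Even with half the relays offline, the note is stored redundantly.
down = {"wss://relay1.example", "wss://relay3.example", "wss://relay5.example"}
assert publish({"kind": 1, "content": "hello"}, lambda r, e: r not in down)
```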

                • sugar_in_your_tea@sh.itjust.works · ↑1 · 2 hours ago

                  Ok, so it’s effectively the same as P2P, just with some guarantees about how many copies you have.

                  In a P2P setup, your data would be distributed based on some mathematical formula such that it’s statistically very unlikely your data is lost when N clients disconnect from the network. The larger the network, the more likely your data is to stick around. Think of BitTorrent, but where you are randomly selected to seed some number of files in addition to the files you explicitly opt into.

                  The risk w/ something like Nostr is if a lot of people pick the same relays, and those relays go down. With the P2P setup I described, data would be distributed according to a mathematical formula, not human decision, so you’re more likely to still have access to that data even if a whole country shuts off its internet or something.

                  Either solution is better than Lemmy/Mastodon or centralized services in terms of surviving something like AWS going down.
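                  One concrete version of that “mathematical formula” is rendezvous hashing, sketched here with hypothetical peer IDs: every client independently computes the same replica set, and losing a peer reassigns only that peer’s slot.

```python
import hashlib

PEERS = [f"peer{i:02d}" for i in range(20)]  # hypothetical node IDs

def replica_set(key: str, peers=PEERS, n=3):
    """Rendezvous (highest-random-weight) hashing.

    Each peer gets a deterministic per-key score; the top n hold the
    replicas. Every client computes the same answer, no coordinator.
    """
    score = lambda p: hashlib.sha256(f"{p}|{key}".encode()).digest()
    return sorted(peers, key=score, reverse=True)[:n]

before = replica_set("post:42")
# If the top replica holder vanishes, only its slot is reassigned;
# the other replicas stay exactly where they were.
after = replica_set("post:42", [p for p in PEERS if p != before[0]])
assert before[1:] == after[:2]
```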

            • Pup Biru@aussie.zone · ↑18 · edited · 15 hours ago

              that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… they’re different, but not necessarily worse in the case of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but “the system as a whole” isn’t a particularly helpful measure when you’re trying to access your specific account

              centralised services are just inherently more stable for the same type of workload because they tend to be less complex, with less network interconnectedness to cause issues, and you can focus a lot more energy on building out automation and recovery rather than repeatedly building the same things… that effort is distributed too, but it’s still human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery

              • shortwavesurfer@lemmy.zip · ↑1 · 13 hours ago

                Right, but even if individual instances go down, you don’t end up with headlines all over the world about half the internet being down, because half the internet isn’t down: the network is self-healing. It temporarily blocks off the problem area, and when the instance comes back, it resynchronizes and continues as normal.

                Services might be temporarily degraded, but not gone entirely.

                • Pup Biru@aussie.zone · ↑3 · 11 hours ago

                  but that’s a compromise… it’s not categorically better

                  you can’t run a bank like you run distributed instances, for example

                  services have different uptime requirements… this is perhaps the first time i’ve ever heard of signal having downtime, and the second time ever that i can remember there’s been a global AWS incident like this

                  and not only that, but lemmy and every service you listed aren’t even close to the scale of their centralised counterparts. we just aren’t there with the knowledge for how to build these services to simply say that centralised services are always worse, less reliable, etc. twitter is the usual example of this. it seems really easy, and arguably you can build a microblogging service in about 30min, but to scale it to the size that it handles is incredibly difficult and involves a lot of computer science (not just software engineering)

            • Alaknár@sopuli.xyz · ↑8 · 12 hours ago

              Come on, mate… Lemmy as a whole didn’t go down, but instances of Lemmy absolutely did go down. As they regularly do, because shit happens.

        • heysoundude@eviltoast.org · ↑1 · 20 hours ago

          That was my point. But as somebody else pointed out here, the degree of security we currently enjoy as Signal users starts to get eroded away.

      • Victor@lemmy.world · ↑8 · 1 day ago

        sending identical blocks of data

        Nitpicking here, but judging from the previous words in your comment, I assume you mean blocks of data of identical length.

        Although it should be as if we are sending multiples of identical size, I suppose.

        Anyway, sorry for nitpicking.

      • DiabolicalBird@lemmy.ca · ↑7 · 8 hours ago

        I did; it’s a buggy, undercooked mess that doesn’t work half the time. The officially supported app is missing half the features, and trying to get people to switch to it is like pulling teeth because the onboarding process is overly complicated for the average user.

        Meanwhile Signal works right out of the box with very little fuss.

      • JoshuaFalken@lemmy.world · ↑3 · 10 hours ago

        I could. Presumably so could the others commenting on this post. But then what are we to do about the privacy or tech illiterate people we’ve carried to Signal over the years?

        It’s easy to whinge about just doing what you perceive as the optimal solution. It’s more difficult when you need to navigate the path to get there from where we are now.

    • elvis_depresley@sh.itjust.works · ↑11 · 1 day ago

      I guess the research doesn’t have to be limited to Signal. The more apps that can benefit from it, the more resilient “private communications over the internet” gets.

    • Victor@lemmy.world · ↑7 · 1 day ago

      So that’s why Signal didn’t send my messages very quickly today then, maybe.

      • DaGeek247@fedia.io · ↑3 ↓1 · 1 day ago

        It’s not completely out yet. That was likely AWS being down.

        Also, the new quantum-protected message encryption headers are about 2 kB. If that’s causing issues with your internet, you may want to consider looking at new internet.

        • Lumisal@lemmy.world · ↑1 · 5 hours ago

          The average person sends/receives about 35 messages a day. That’s 70 kB per person per day.

          Signal has aprx. 100 million users.

          Which means this adds about 7 terabytes daily.

          Just doing the math on it, there’s no point to this message 😁
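          The arithmetic above does come out to that figure (using the ~2 kB header size and user count quoted in this thread):

```python
header_kb = 2            # extra post-quantum header per message (from above)
msgs_per_day = 35        # messages sent/received per person per day
users = 100_000_000      # approximate Signal user count

per_person_kb = header_kb * msgs_per_day      # 70 kB per person per day
total_tb = per_person_kb * users / 1e9        # kB -> TB (decimal units)
assert total_tb == 7.0                        # ~7 TB of headers daily
```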

        • Frezik@lemmy.blahaj.zone · ↑8 · 1 day ago

          2 kB? While it may not sound like much, that’s a couple of packets’ worth of data (depending on MTU). If you think about how TCP sends packets and waits for ACKs, there’s actually a lot of round-trip processing going on for just that one part.

          • xthexder@l.sw0.com · ↑8 · 23 hours ago

            TCP will generally send up to 10 packets immediately without waiting for ACKs (depending on the configured initial window size).

            Generally, any message or website under ~14 kB will be transmitted in a single round trip, assuming no packets are dropped.
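            Those numbers check out with a little arithmetic (assuming a typical 1500-byte Ethernet MTU and the RFC 6928 default initial congestion window of 10 segments):

```python
MSS = 1460      # payload per packet: 1500-byte MTU minus IP + TCP headers
INITCWND = 10   # default initial congestion window (RFC 6928)

def packets(size_bytes: int) -> int:
    """Number of full-size TCP segments needed for a payload."""
    return -(-size_bytes // MSS)  # ceiling division

assert packets(2 * 1024) == 2        # a 2 kB header: two full-size packets
assert MSS * INITCWND == 14_600      # ~14 kB can go out in the first flight
assert packets(14_600) <= INITCWND   # i.e. before any ACK comes back
```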

        • Victor@lemmy.world · ↑4 · edited · 1 day ago

          That was likely AWS being down.

          Sorry, yeah, that’s the only thing I was referring to.

          My internet connection is 500/500 Mbps, and I can’t change it. 😄👍

          • naticus@lemmy.world · ↑1 · 18 hours ago

            Should have been pretty obvious to anyone reading any tech news whatsoever today, especially in the context of where you responded. No apology from you should have been necessary!

            • Victor@lemmy.world · ↑1 · 15 hours ago

              You would think 😅 The sorry was slightly sarcastic, but shhh, nobody need know.