• Pup Biru@aussie.zone · 2 days ago

    this seems needlessly combative… prevailing opinions are exactly as signal says… think differently? great! let’s do it, talk about it, see how it goes, and when the solution has scaled in the real world to what it’s competing against then you can feel superior as the one that had the vision to see it

    but scaling is hard, and distributed tech is hugely inefficient

    there are so many unknowns

    anyone can follow a random “getting started with web framework X” guide to make a twitter clone… making a twitter clone that handles the throughput twitter does? that takes legitimately hard computer science (fuck twitter, but it remains both a good and common example)

    heck, even lemmy has huge issues with sync at its current tiny scale when there’s any reasonable latency involved… i remember only months ago when aussie.zone was getting updates days late because of ~300ms latency to the EU/US and lemmy’s sequential handling of outboxes (afaik)
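    To put rough numbers on the sequential-outbox problem: if each activity waits a full round trip before the next one goes out, throughput is capped at 1/RTT. A back-of-the-envelope sketch (the daily volume is a made-up illustration, not a measured lemmy figure):

```python
# back-of-the-envelope: a sequential outbox that waits one full
# round trip per activity before sending the next
# (numbers are illustrative, not measured from lemmy)
rtt_s = 0.300                  # ~300ms latency aussie.zone <-> EU/US
activities_per_day = 400_000   # hypothetical federation volume

max_throughput = 1 / rtt_s                        # ~3.3 activities/second
hours_needed = activities_per_day * rtt_s / 3600  # pure waiting time

print(f"max ~{max_throughput:.1f} activities/s per peer")
print(f"{activities_per_day} activities need ~{hours_needed:.1f}h of waiting")
```

    At ~3.3 activities/second per peer, any sustained inflow above that rate grows the backlog without bound, which is how updates end up days late.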

    • Jason2357@lemmy.ca · 2 days ago

      Indeed! Ever since XMPP was argued to be superior to everything else, I’ve come to just say “build it and show us.” No one cares about having multiple chat apps on their devices; if it’s good enough, it will first be added alongside Signal, then replace it only when it’s clearly better.

      • Pup Biru@aussie.zone · 2 days ago

        right? like yeah i remember XMPP being cool n all, but all the experiences suuuuucked, not to mention (back in the day… i think it’s fixed now?) figuring out how the hell to get video calling working… “what extensions does your client support?” is not a question a lay-person will ask: centralised systems don’t have extensions… they have “the way it’s done” and that’s it

    • entwine@programming.dev · 2 days ago

      but scaling is hard, and distributed tech is hugely inefficient

      How is it inefficient for a chat app? If anything, a distributed architecture is the ideal for this use case. It’s only potentially a problem if you need to have huge group chats, which is definitely not the common use case for a chat app, but even then I think Delta Chat’s optimized relays can handle that.

      see how it goes, and when the solution has scaled in the real world to what it’s competing against then you can feel superior as the one that had the vision to see it

      Delta Chat uses existing email infrastructure, which has already proven its ability to scale. Nigerian princes probably send more emails per hour than the entire global Signal network.

      • lad@programming.dev · 2 days ago

        I’m guessing inefficient in the sense that with distributed you need more computational power in total than with centralised

        • Pup Biru@aussie.zone · 2 days ago

          inefficient in the sense that

          • traffic goes over the internet rather than internal networks, which means the routing is much longer, over slower links
          • in distributed systems, information is frequently duplicated many times rather than referenced on some internal system (sending an email to 20 people duplicates that email 20 times across many providers rather than simply referencing an internal ID… you could centralise the content and send out a small notification message, but that’s generally not what people mean when they talk about modern distributed systems)
          • each system can’t trust any other, so there’s a lot more processing each node has to do to maintain a consistent internal state: validating and transforming raw data for itself - not usually a particularly big task, but multiplied by millions of messages per second it adds up fast
          • hardware scaling is simply not as easy either… with centralised systems you have, say, 1000 servers at 95% capacity (whatever that means): you can run them close to capacity because sheer volume insulates your traffic from load spikes, and you generally won’t get 5% more load faster than you can scale up another server. in distributed systems (or rather smaller systems, which is implicit here unless you’re running enough hardware and software to duplicate the whole network - which would take more servers anyway due to the other inefficiencies) you need much more “room to breathe” to absorb load spikes
          • things like spares and redundancy for outage mitigation also become more expensive: if you have 1000 servers, keeping a couple of hot spares (either parts or entire systems, depending on architecture and uptime requirements) isn’t that big of a deal, but in a distributed system every instance needs its own hot spares somewhere (this is really the traffic issue again: spares of all kinds are just unused capacity, so the higher your ratio of spares to servers, the more under-utilised your hardware)
          • and this is all without getting into the human effort of building systems… instance owners all need to manage their own infrastructure, which means the mechanisms to handle things like upgrades without downtime, scaling, spam protection, bots, etc have all been built many, many times over
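          The duplication point above is easy to quantify. A toy sketch (the message size and internal-ID size are made up for illustration):

```python
# toy comparison of storage fan-out between a centralised provider
# and federated delivery (sizes are made up for illustration)
MESSAGE_BYTES = 50_000   # a 50kB email
RECIPIENTS = 20

# centralised: one stored copy, each inbox holds a small internal ID
central_bytes = MESSAGE_BYTES + RECIPIENTS * 16

# federated: every recipient's provider receives its own full copy
federated_bytes = RECIPIENTS * MESSAGE_BYTES

print(central_bytes, federated_bytes)
print("fan-out factor:", round(federated_bytes / central_bytes, 1))
```

          With these assumed sizes the federated network stores roughly 20x the bytes for the same message, one full copy per recipient’s provider.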

          NONE of this is to say that they’re worse. in many ways they have a lot of advantages, but it’s not a clear-cut win in a lot of cases either… as with most things in life, “it depends”. distributed systems are resistant to whole-network outages (at the expense of many more partial network outages), they’re resistant to censorship, and they implicitly have a machine-to-machine interface, so the network as a whole is implicitly automatable (that might be a bad thing for things like spam, privacy, bots, etc), but people generally tend to be pro-bots and pro-3rd-party apps
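          The headroom point can be sketched the same way: serve the same total load at 95% utilisation on one big fleet versus 60% on many small instances (both utilisation figures are assumptions, not measurements):

```python
# rough headroom math (all numbers are illustrative assumptions)
total_load = 950.0    # the load 1000 centralised servers carry at 95%

central_util = 0.95   # a big fleet can run close to capacity
small_util = 0.60     # small instances keep slack for load spikes

central_servers = total_load / central_util    # back to 1000
federated_servers = total_load / small_util    # ~1583

print(round(central_servers), "vs", round(federated_servers), "servers")
```

          Under these assumptions the distributed deployment needs roughly 58% more hardware for the same load, purely from the extra idle headroom each small instance has to carry.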

        • entwine@programming.dev · 2 days ago

          Idk what to say to this. Is it true? I don’t know, and you probably don’t either. That’s a weird way to look at it, and I doubt anyone has measured the power costs of the global email network.

          It’s also useless for decision-making. What matters is a question like “how much would it cost me to host a server and contribute to the network?”. Even if the total global cost is billions of dollars, the network will continue to grow because nobody has to pay all of it.