that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… they’re different, but not necessarily worse in terms of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but “the system as a whole” isn’t a particularly helpful measure when you’re trying to access your specific account
centralised services are just inherently more stable for the same type of workload because they tend to be less complex, have less network interconnectedness to cause issues, and let you focus a lot more energy on building out automation and recovery instead of repeatedly building the same things… in a federation that effort is spread across every instance, but it’s still the same pool of human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery
Right, but even if individual instances go down, you don’t end up with headlines all over the world about half the internet being down, because half the internet isn’t down: the network is self-healing. It temporarily blocks off the problem area, and when the instance comes back up, it resynchronizes and continues as normal (roughly the queued-retry behaviour sketched below).
Services might be temporarily degraded, but not gone entirely.
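To make the “blocks off the problem area, then resynchronizes” part concrete, here’s a minimal sketch of queued federation delivery with exponential backoff. The names (`Peer`, `deliver`, `send`) and the backoff numbers are assumptions for illustration, not Lemmy’s actual ActivityPub code:

```python
# Minimal sketch of "self-healing" federation delivery — illustrative only.
import time

BACKOFF_BASE = 2    # seconds; doubles per consecutive failure (assumed values)
BACKOFF_CAP = 3600  # stop growing the wait at one hour

class Peer:
    def __init__(self, domain):
        self.domain = domain
        self.failures = 0        # consecutive delivery failures
        self.next_attempt = 0.0  # unix time before which we skip this peer
        self.outbox = []         # activities queued while the peer is down

def deliver(peer, activity, send, now=None):
    """Try to send one activity; on failure, queue it and back off."""
    now = now if now is not None else time.time()
    if now < peer.next_attempt:
        peer.outbox.append(activity)  # peer is "blocked off" — just queue
        return False
    try:
        send(peer.domain, activity)
    except OSError:
        peer.failures += 1
        wait = min(BACKOFF_BASE * 2 ** peer.failures, BACKOFF_CAP)
        peer.next_attempt = now + wait
        peer.outbox.append(activity)
        return False
    peer.failures = 0
    # Peer is back: drain everything queued while it was down (the resync step).
    while peer.outbox:
        queued = peer.outbox.pop(0)
        try:
            send(peer.domain, queued)
        except OSError:
            peer.outbox.insert(0, queued)  # put it back, retry on a later call
            break
    return True

# usage: peer = Peer("lemmy.example"); deliver(peer, {"type": "Like"}, my_send_fn)
```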
but that’s a compromise… it’s not categorically better
you can’t run a bank like you run distributed instances, for example
services have different uptime requirements… this is perhaps the first time i’ve ever heard of signal having downtime, and only the second global AWS incident like this that i can remember
and not only that, but lemmy and every service you listed aren’t even close to the scale of their centralised counterparts. we just don’t have the collective knowledge of how to build these services at that scale to simply say that centralised services are always worse, less reliable, etc. twitter is the usual example of this: it seems really easy, and arguably you can build a microblogging service in about 30min (see the sketch below), but scaling it to the traffic twitter handles is incredibly difficult and involves a lot of computer science (not just software engineering)
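to ground that “30min” claim: the toy version really is tiny, but the timeline query is exactly where it falls over… a hedged sketch, all names illustrative and not any real service’s code:

```python
# A naive in-memory microblog — roughly the "30-minute" version.
from dataclasses import dataclass, field
import time

@dataclass
class Post:
    author: str
    text: str
    ts: float = field(default_factory=time.time)

class Microblog:
    def __init__(self):
        self.posts = []    # every post ever, in one flat list
        self.follows = {}  # user -> set of followed users

    def post(self, author, text):
        self.posts.append(Post(author, text))

    def follow(self, user, target):
        self.follows.setdefault(user, set()).add(target)

    def timeline(self, user, limit=20):
        followed = self.follows.get(user, set())
        # Scans *every* post on the site for every timeline request —
        # fine for a demo, catastrophic at twitter scale, where fan-out
        # strategies, sharding, and caching (the actual computer science)
        # take over.
        return [p for p in reversed(self.posts) if p.author in followed][:limit]

blog = Microblog()
blog.post("alice", "hello world")
blog.follow("bob", "alice")
print([p.text for p in blog.timeline("bob")])  # ['hello world']
```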