I don’t use the Arr suite or Qbit (nor do I really torrent that much) so I can’t speak to the second part, but for scaling I use Ceph. I currently have about 95 TiB of raw capacity across 3 machines, and in my experience scaling it up further (i.e., adding more drives to a machine or adding new machines) is fairly straightforward. That said, I have my cluster set up to keep one copy of the data on each machine (so every object is replicated three times), and a few TiB reserved for metadata, which leaves me only about 29 TiB for unique object storage. That kind of setup isn’t strictly necessary, though. You could set up your own cluster with no redundancy and use most of your available storage for unique objects (a relatively small portion of it will still need to go to metadata if you want to set up a Ceph filesystem, but even that is optional).
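For anyone curious what the "one copy per machine" vs. "no redundancy" choice actually looks like in Ceph, it mostly comes down to the pool's replication size and CRUSH rule. A rough sketch below; the pool and rule names (`data_pool`, `one-per-host`, `cephfs_meta`) are just placeholders, and the pg counts are illustrative, not tuned. This assumes a running cluster, so it's config-only, not something you can run standalone.

```shell
# Create a data pool (128 placement groups is just an example value).
ceph osd pool create data_pool 128

# --- Option A: one replica per machine (what I run) ---
# CRUSH rule that spreads replicas across hosts (failure domain = host).
ceph osd crush rule create-replicated one-per-host default host
ceph osd pool set data_pool crush_rule one-per-host
# With 3 machines, size 3 means one copy on each.
ceph osd pool set data_pool size 3

# --- Option B: no redundancy, maximize usable space ---
# size 1 keeps a single copy; lose a disk, lose that data.
ceph osd pool set data_pool size 1

# --- Optional: CephFS, which needs a separate metadata pool ---
ceph osd pool create cephfs_meta 32
ceph fs new myfs cephfs_meta data_pool
```

With Option A the usable space is roughly raw capacity divided by 3 (which is how 95 TiB raw becomes ~29 TiB unique for me, after metadata); with Option B it's close to the full raw capacity minus metadata overhead.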