

Seeding works fine without port forwarding; you just won’t connect to as many peers.


For normal use like that, 16GB is generally just fine. Some games can use enough that you’ll need to close Firefox and other RAM-hungry programs, though.
As for needing more than that: people who do heavy design work or video editing generally do. For example, 32GB can be a bit limiting when running Fusion in DaVinci Resolve with higher-resolution or 10-bit footage.


US Mobile lets you pick from all 3 main carriers, and I think their newest plans let you use 2 carriers at the same time too.
It’s also quite cheap.
I use it, but just on the T-Mobile network, because that’s the only one that works where I am.


Also Thunderbird, but specifically the Betterbird fork.
It works well, it’s fast, it’s lightweight (around 100-200MB of RAM), and it has lots of features.
I also keep my calendar in it.


Another day, another Unreal Engine game with massive performance issues.


Oh, I see what you mean, yeah. I’ve never used NFS with it before.


NAT traversal isn’t seeing any of your data; it’s just a service that lets clients behind NAT find each other and establish a direct connection for data transfer.
Local discovery probably uses broadcasts and maybe mDNS to find other Syncthing clients on the same local network.
Global discovery is essentially a database of clients so they can find each other over the internet. This is what lets your phone connect home when you’re out and about.
But all of the actual data transfer happens directly, client to client, as long as relaying is disabled.
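For reference, here’s roughly how those knobs appear in Syncthing’s `config.xml` (option names are from recent Syncthing versions; double-check against your install):

```xml
<options>
    <!-- LAN discovery via local broadcast/multicast announcements -->
    <localAnnounceEnabled>true</localAnnounceEnabled>
    <!-- Register with the public global discovery servers -->
    <globalAnnounceEnabled>true</globalAnnounceEnabled>
    <!-- Turn relays off so data only ever flows over direct connections -->
    <relaysEnabled>false</relaysEnabled>
    <!-- Keep NAT traversal (UPnP/NAT-PMP port mapping) on -->
    <natEnabled>true</natEnabled>
</options>
```

With relays disabled, two devices that can’t punch through NAT simply won’t sync, so it’s a trade of reachability for the guarantee of direct transfers.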


Tailscale or Zerotier are the current best options I think.


Yeah, it sounds nice, but it’s too much of a time investment for me.
I can install the PBS client on any system, but it requires manual setup and scheduling, which I don’t want to do. When it’s used with Proxmox, all of that is handled for me.
Also, I don’t think Proxmox cares about the storage either; I just use ZFS, which is completely standard under the hood.
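As a sketch of what that manual setup looks like on a non-Proxmox host (the repository user, server name, and datastore below are hypothetical placeholders; check the `proxmox-backup-client` docs for your version):

```shell
# Point the client at a PBS datastore (placeholder values)
export PBS_REPOSITORY="backup@pbs@backup-server:mystore"
export PBS_PASSWORD="secret"   # or use an API token instead

# One-off backup of / as an archive named root.pxar
proxmox-backup-client backup root.pxar:/

# Scheduling is on you too, e.g. a nightly cron entry:
# 0 2 * * * root PBS_REPOSITORY=... PBS_PASSWORD=... proxmox-backup-client backup root.pxar:/
```

On a Proxmox VE host, the equivalent of all of this is a few clicks in the backup-job UI.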


No backup utility like PBS, though; that’s why I haven’t switched.


Intel AMT also works for out of band management on consumer hardware.


I don’t think I’ve ever had a quality-brand PSU die on me. Software RAID like md or ZFS works fine on basically any hardware, and I wouldn’t use hardware RAID these days anyway.
I used to worry about that stuff and use enterprise hardware, but it’s just so expensive for decent performance, and so power hungry.
Try to match even a budget i3-12100 or similar for single-thread performance (needed mostly for game servers) and you really can’t with used enterprise gear. Plus that i3 has an iGPU that can handle a ton of transcoding, and ML for things like Immich search or Frigate object detection. And it uses about 10W or less most of the time.
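To illustrate how little software RAID asks of the hardware, a mirror on two plain consumer disks is one command either way (device names below are placeholders):

```shell
# ZFS mirror across two ordinary disks
zpool create tank mirror /dev/sda /dev/sdb

# Or the mdadm equivalent: RAID1 over the same two disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
```

No controller card, no battery-backed cache, and the array moves to any other Linux box by just plugging the disks in.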


Or even:


Yeah, media is a good use case for it, and it doesn’t really need cache either.


It can’t; you lose space efficiency if the disks you add aren’t the same size as the old ones.
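To make the space-efficiency point concrete, here’s a quick back-of-the-envelope calculation, assuming the usual rule that every member of a parity vdev only contributes as much capacity as the smallest disk:

```python
def raidz_usable_tb(disk_sizes_tb, parity=1):
    """Approximate usable space of a raidz-style vdev: each disk is
    effectively truncated to the smallest member's size."""
    per_disk = min(disk_sizes_tb)
    return (len(disk_sizes_tb) - parity) * per_disk

# Three matched 8TB disks in raidz1: 16TB usable
print(raidz_usable_tb([8, 8, 8]))   # -> 16

# Replace one with a 4TB disk: the 8TB disks are treated as 4TB,
# so usable space drops to 8TB and 8TB of raw capacity sits idle
print(raidz_usable_tb([4, 8, 8]))   # -> 8
```

That stranded capacity is exactly what mixed-size-friendly setups avoid.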


It has no parity. You can pair it with SnapRAID, but that’s snapshot parity, not real-time parity; whether that works depends on the use case.
There are also no caching options.
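For anyone curious, a minimal SnapRAID pairing looks something like this (mount points below are placeholders):

```
# /etc/snapraid.conf (sketch)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

Parity is only updated when `snapraid sync` runs (typically from a nightly cron job), so anything written since the last sync is unprotected. That’s the “snapshot parity” caveat.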


The difference is I can do something about my downtime and fix it.


Linux/open-source naming can be the wildest stuff.


The big thing is being able to very easily mix and match different sizes of disks. ZFS can sort of do that as of recently, but it’s not as efficient.
The reason, as I understand it, is better performance and reliability by ditching PHP, which is what causes most of Nextcloud’s problems.