That’s definitely not super-old-CPU territory, but it’s not super fast either. Modern CPUs should be something like 20-30% faster, I think, in single-core performance, which is what compression mostly uses.
Realistically, compression should be as aggressive as possible, because it saves bandwidth, and the CPU time it costs is basically a free resource.
Sure, and I have no issues with compression or encryption on the device. In fact, I used full-disk encryption for a few years with zero issues, and I’ve done plenty of compression and decompression as well (I think my new OS even compresses at the FS layer). Most of the time that stuff is a complete non-issue, since CPUs have special instructions for common algorithms; I think they’re just using something fancy that doesn’t have hardware acceleration on my CPU, or something.
I’m planning to replace it, but it still works well for what I need it for: Minecraft for the kids, Rust Dev for me, and indie games and videos every so often. I’m on integrated graphics and it’s still holding up well.
It’s my understanding that on-disk compression is different from network compression: network compression usually uses gzip IIRC, whereas on-disk compression tends to use something from the LZ family. File downloads are generally less latency-sensitive than a file system, so you can get away with slower, more expensive compression there.
Yeah, because you’re optimizing for different things.
server - maximize requests handled, so you need to balance CPU usage and bandwidth
disk - more bottlenecked by disk speed and capacity, so spending more time compressing is fine, provided decompression is fast (reads usually far outnumber writes)
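You can see both pressures with a quick stdlib experiment. zlib (DEFLATE, i.e. what gzip uses) and lzma here are just stand-ins for "fast codec" vs "heavy codec", and the exact numbers depend entirely on the data:

```python
import time
import zlib
import lzma

# Repetitive sample data; real results depend heavily on the input.
data = b"{'user': 123, 'items': ['aaaa', 'bbbb', 'cccc']}\n" * 20000

def bench(name, compress, decompress):
    t0 = time.perf_counter()
    blob = compress(data)
    t1 = time.perf_counter()
    decompress(blob)
    t2 = time.perf_counter()
    print(f"{name:10s} ratio={len(blob) / len(data):.3f} "
          f"compress={t1 - t0:.3f}s decompress={t2 - t1:.3f}s")

bench("zlib -1", lambda d: zlib.compress(d, 1), zlib.decompress)  # fast, FS-style
bench("zlib -9", lambda d: zlib.compress(d, 9), zlib.decompress)  # slower, ship-once style
bench("lzma",    lzma.compress,                 lzma.decompress)  # slowest, best ratio
```

On text like this you’ll generally see decompression stay cheap across the board, which is why "compress hard once, decompress many times" works so well for downloads.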
The server isn’t live-compressing it; in most cases it’s pre-compressed binaries being shipped hundreds of thousands of times over. Compression is primarily there to minimize bandwidth (and also to speed up downloads, since the network is usually the bottleneck). You can either cache the compressed files, or gate the download speed on decompression speed.
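That pre-compression step can be as dumb as a build script that writes .gz siblings next to the static files, something like this (the ./public directory and the skip list are just placeholders):

```python
import gzip
from pathlib import Path

# Compress once at build/deploy time so nothing gets recompressed per download.
for path in Path("./public").rglob("*"):
    if path.is_file() and path.suffix not in {".gz", ".png", ".jpg", ".zip"}:
        gz_path = path.parent / (path.name + ".gz")
        # mtime=0 keeps the output byte-identical across rebuilds (cache-friendly)
        gz_path.write_bytes(gzip.compress(path.read_bytes(), compresslevel=9, mtime=0))
```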
Most disks are faster than any network connection typically available, so it’s pretty hard to hit that bottleneck these days. HDDs included, unless you’re using SMR drives in some specific use case, and an SSD is basically never the bottleneck.
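Rough back-of-envelope numbers for why the network usually loses that race (ballpark figures, not benchmarks):

```python
# Approximate sequential throughput in MB/s; ballpark figures only.
links = {
    "500 Mbps internet": 500 / 8,   # ~62 MB/s
    "gigabit LAN":       1000 / 8,  # ~125 MB/s
    "7200 rpm HDD":      150,
    "SATA SSD":          550,
    "NVMe SSD":          3000,
}
for name, mbs in links.items():
    print(f"{name:18s} ~{mbs:5.0f} MB/s")
```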
Although on the FS side, you’d optimize for minimum latency. Latency really fucks up a file system (that and corrupt data), so if you can keep the latency impact minimal and the compression/decompression algorithm reliable, you get a decent trade-off: some size savings in exchange for a bit of latency and CPU time.
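The way I think about it: the cost shows up per read, because the FS compresses fixed-size records. Rough sketch with an assumed 128 KiB record, and zlib level 1 standing in for the LZ4/zstd codecs real filesystems actually use:

```python
import os
import time
import zlib

# One 128 KiB "record": half incompressible, half zeros, just to have something to chew on.
block = os.urandom(64 * 1024) + bytes(64 * 1024)
compressed = zlib.compress(block, 1)

t0 = time.perf_counter()
for _ in range(1000):
    zlib.decompress(compressed)
t1 = time.perf_counter()

print(f"record shrank to {len(compressed) / len(block):.0%} of its size")
print(f"~{(t1 - t0) / 1000 * 1e6:.0f} µs of decompression added to each 128 KiB read")
```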
Whether or not FS-level compression is worth it, I’m not quite sure yet; I’m personally bigger on de-duplication.
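De-dup is conceptually just "hash the content, collapse identical copies". A whole-file toy version (real systems like ZFS or borg chunk at the block level; ./data is a made-up path):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

# Group files by a hash of their bytes; anything sharing a digest is a duplicate.
seen = defaultdict(list)
for path in Path("./data").rglob("*"):
    if path.is_file():
        seen[hashlib.sha256(path.read_bytes()).hexdigest()].append(path)

for digest, paths in seen.items():
    if len(paths) > 1:
        print(f"duplicate content {digest[:12]}…: {[str(p) for p in paths]}")
```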
It is for generated data, like a JSON API (rough sketch below). Static content is often pre-compressed, though, since there’s no reason to do it on every request if it can be done once. The compression format is largely limited to whatever the client supports, and gzip works pretty much everywhere, so it’s generally preferred.
At least that’s my understanding. Every project I’ve worked on has a pretty small userbase, so something like 50-100 concurrent users (mostly B2B projects), meaning we didn’t have the same problems as something like a CDN might have.
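The generated-data path I mean looks roughly like this (framework-agnostic sketch; the helper name and the 1 KiB threshold are made up):

```python
import gzip
import json

def maybe_gzip(payload: dict, accept_encoding: str) -> tuple[bytes, dict]:
    """Serialize a response and gzip it only if the client said it can cope."""
    body = json.dumps(payload).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if "gzip" in accept_encoding.lower() and len(body) > 1024:  # skip tiny payloads
        body = gzip.compress(body, compresslevel=6)  # 6 is zlib's default speed/ratio trade-off
        headers["Content-Encoding"] = "gzip"
    headers["Content-Length"] = str(len(body))
    return body, headers

# e.g. body, headers = maybe_gzip(result, request_headers.get("Accept-Encoding", ""))
```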
I’m not really sure how latency comes into play for FS operations. Are you saying that if the CPU lags behind the read speed, it’ll mess up the stream? Or something else? I’m not an expert on filesystems.