Red Hat charge for access to the RHEL binaries. That’s literally why CentOS came into existence.
Everything that’s in main gets security fixes released to everyone; Canonical’s security team works on those.
The stuff in the universe repo is owned by the Ubuntu community (not by Canonical), so anyone can submit fixes, but getting them upstreamed depends on the Masters of the Universe, who are all volunteers.
The extra Ubuntu Pro updates for the universe repo happen when someone who’s paying for Ubuntu Pro asks Canonical to make a patch. The source is still available to anyone, so someone could take that patch and submit it to the community that maintains the universe repo.
Once the five years of standard support end, the only way to get additional fixes is through Ubuntu Pro, but if Canonical writes those fixes they also submit them back upstream (as opposed to when they just grab a specific patch from upstream), and either way it’s still available on Launchpad.
The reason nobody’s made a “CentOS, but for Ubuntu Pro” is that it’s way easier to submit the patches through the community (and become part of the community that approves packages) than it is to spin up all the infrastructure that would be needed.
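If you want to see where a given package’s updates actually come from on your own machine, here’s a minimal sketch using python-apt (the python3-apt package that ships with Ubuntu). The package name is just an example, and on a Pro-attached machine the ESM repos should show up as their own origins:

```python
# Minimal sketch: show which component (main/universe/...) and pocket a
# package's candidate version comes from. Requires python3-apt, which ships
# on Ubuntu; "curl" below is just an example package.
import apt

cache = apt.Cache()
pkg = cache["curl"]  # swap in whatever package you're curious about

for origin in pkg.candidate.origins:
    # component: "main", "universe", etc.
    # archive:   the pocket, e.g. "noble-security" or "noble-updates"
    # origin:    "Ubuntu" for the regular archive; ESM repos use their own label
    print(origin.origin, origin.archive, origin.component, origin.site)
```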


As someone who owns several RISC-V devices, I’d say the primary thing preventing usable (low-end) RISC-V laptops is the GPUs. Most RISC-V silicon has Imagination GPUs, and the current state of the drivers there is “proprietary drivers stuck on an old LTS kernel.”
If someone makes an RVA23-compliant chip with open drivers that can be mainlined and a BXS-4-64 GPU (or, better yet, somehow manages to license a GPU from Intel or AMD for it), that’ll be a cash cow.
If by “WRECK” you mean “improve”, then yes.
Meanwhile I’m here on Wayland because it does things that X11 doesn’t.
My OS came with an officially packaged (by Mozilla) non-LTS version of Firefox that gets regular version upgrades.
They don’t in general, but things that do heavy graphics work (like your compositor or browser) or a lot of cryptography on the CPU can get a bit more out of those newer instructions than most other programs.
Very approximately, the things that Gentoo offers prebuilt versions of because compiling them is so resource-intensive are often the things that can get the most benefit out of your architecture variant. (Not singling out Gentoo here as an example of “doing it badly”: they do the sensible thing by providing these prebuilt binaries, but in some ways it defeats the purpose of optimised source distributions.)
It’s a Hard Problem™ to solve.
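To make that a bit more concrete, here’s a rough sketch (Linux on x86 only, and using the kernel’s flag names rather than the compiler’s) that checks which of the relevant instruction-set extensions your CPU actually advertises. These are the kinds of things a browser’s SIMD paths or a crypto library’s AES/SHA routines can pick up when a build targets your specific architecture variant:

```python
# Rough sketch: list which SIMD/crypto instruction-set extensions this CPU
# advertises, by parsing the "flags" line in /proc/cpuinfo. Linux/x86 only;
# other architectures spell things differently (ARM uses a "Features" line).
from pathlib import Path

def cpu_flags() -> set[str]:
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()

# avx2/avx512f: wide SIMD that compositors and browsers can use;
# aes/vaes/sha_ni: the hardware crypto instructions that TLS libraries lean on.
for ext in ("avx2", "avx512f", "aes", "vaes", "sha_ni"):
    print(f"{ext:<8} {'yes' if ext in flags else 'no'}")
```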


Services I know of that have both HTTPS and SSH access have seen all sorts of weird stuff seemingly related to LLM bot scraping over the past few months. Enough to bring down some git servers.


I solved this by using Linux anyway and being way more productive than other folks.
Look I don’t have heat in the winter so I compile Firefox for various processors to keep my bedroom warm okay?
The irony is that big things like Firefox can get the most advantage from building for your specific CPU variant, especially if you use them frequently.


For fun.


I don’t know what’s worse: the fact that nobody at Microsoft registered that short URL for the lulz or the resulting destination for short URLs that don’t get found.


Maybe something like OpenStack?
Everything on the system, including the desktop, kernel, and CUPS, can be installed as a snap.


Turns out hosting a bunch of files is very cheap.
I love Lemmy.
I was wondering whether I was going to have to explain that rule to a crowd of angry zealots, furious that I could possibly oppose the Great and Mighty Apple like that.
I’m not opposed to having Macs in my collection (though as it so happens I don’t have any right now), because it’s not about hating Apple; it’s entirely about whether I can do something useful with the hardware.
A majority of the ARM hardware I have is old Android phones booting a pretty standard Linux distro with custom kernels. Most of them have drivers missing for various pieces of hardware, but as long as they can boot, connect to my homelab network over USB and run containers, they make excellent build/test devices.


Discourse, not Discord. The accounts are managed through the same SSO that manages Launchpad accounts, so the devs who will use this already have an account.
You mean things like cloud-init, Juju, a ton of work they do directly upstream on OpenStack, hardware certifications (which include things like getting vendors to upstream their drivers into the mainline kernel, something even Google has struggled with for Android), and making it more feasible for more companies to run Linux by providing the sort of long-term support that the community just doesn’t prioritise?