I was once on a video call with my sister, walking around the house and getting more and more frustrated as I did so. Eventually she asked me what I was looking for.
“I CAN’T FIND MY GODDAMN PHONE!”
She burst out laughing.
…that’s not what they’re doing though?
Those patches are either pulled from upstream or written in-house and contributed back upstream. Just as in Debian, and just as in the regular Ubuntu releases, the package is based on some upstream version, and then the deb packaging applies the patch sets listed in the diff tarball.
Here’s what the latest kernel for Ubuntu 26.04 looks like: https://launchpad.net/ubuntu/+source/linux/6.17.0-6.6
Those same tarballs are available for any Ubuntu package by running apt source <pkg> as long as you’ve configured the matching deb-src repositories.
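For example, assuming the deb-src entries are already enabled (and dpkg-dev is installed so the source actually gets unpacked), and using “hello” purely as a stand-in package name:

    sudo apt update      # refresh the package indexes, including the source ones
    apt source hello     # fetches the upstream .orig tarball plus the Debian/Ubuntu
                         # packaging (the diff/debian tarball) and unpacks them
                         # into ./hello-<version>/

The same thing works for the kernel (apt source linux) if you want to see exactly which patches were applied on top of the upstream release.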
You mean things like cloud-init, Juju, a ton of work they do directly upstream on OpenStack, hardware certifications (which include things like getting vendors to upstream their drivers into the mainline kernel, something even Google has struggled with for Android), and making it more feasible for more companies to run Linux by providing the sort of long-term support that the community just doesn’t prioritise?
Red Hat charge for access to the RHEL binaries. That’s literally why CentOS came into existence.
Everything that’s in main gets released to everyone with the security fixes. Canonical’s security team works on those.
The stuff in the universe repo is owned by the Ubuntu community (not by Canonical), so anyone can submit those fixes, but it’s up to the Masters of the Universe, who are all volunteers, to get them accepted.
The extra Ubuntu Pro updates for the universe repo typically come about when someone who’s paying for Ubuntu Pro asks Canonical to make a patch. The source is still available to anyone, so someone could take that patch and then submit it to the community that maintains the universe repo.
Once the 5 years of standard support end, the only way to get additional fixes is through Ubuntu Pro, but if Canonical writes those fixes they also submit them back upstream (as opposed to when they just grab a specific patch from upstream), and either way it’s still available on Launchpad.
The reason nobody’s made a CentOS equivalent of Ubuntu Pro is that it’s way easier to submit the patches through the community (and become part of the community that approves packages) than it is to spin up all the infrastructure that would be needed.
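If you want to see where the main/universe line and the Pro coverage fall on a given machine, the Ubuntu Pro client that ships with recent releases can summarise it (exact output varies by release and by whether the machine is attached to a subscription):

    pro security-status   # counts installed packages from main vs universe and
                          # shows which ones are covered by ESM (the Pro updates)
    pro status            # shows whether esm-infra / esm-apps are enabled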


As someone who owns several RISC-V devices, I’d say the primary thing preventing usable (low-end) RISC-V laptops is the GPUs. Most RISC-V silicon has Imagination GPUs, and the current state of the drivers there is “proprietary drivers stuck on an old LTS kernel.”
If someone makes an RVA23-compliant chip with open, mainlineable drivers and a BXS-4-64 GPU (or, better yet, somehow manages to license a GPU from Intel or AMD for it), that’ll be a cash cow.
If by “WRECK” you mean “improve”, yes.
Meanwhile I’m here on Wayland because it does things that X11 doesn’t.
My OS came with an officially packaged (by Mozilla) non-LTS version of Firefox that gets regular version upgrades.
They don’t in general, but things that do heavy, detailed graphics work (like your compositor or browser) or lots of cryptography work on the CPU can get a bit more out of those newer instructions than many other programs.
Very approximately, the things Gentoo offers prebuilt versions of because compiling them is so resource-intensive are often the things that benefit most from being built for your architecture variant. (Not singling out Gentoo here as an example of “doing it badly” - providing these prebuilt binaries is the sensible thing to do, but in some ways it defeats the purpose of an optimised source distribution.)
It’s a Hard Problem™ to solve.
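To make “building for your architecture variant” a bit more concrete, here’s a rough sketch using a hypothetical source file (crunch.c) and plain GCC flags; a source distribution’s per-package CFLAGS ultimately boil down to something like this:

    # Baseline build that runs on any x86-64 CPU
    gcc -O2 -march=x86-64 -o crunch-generic crunch.c

    # Tuned build: lets the compiler use whatever this particular CPU supports
    # (AVX2, AES-NI, and so on), which is where crypto- and pixel-heavy code gains
    gcc -O2 -march=native -o crunch-native crunch.c

The trade-off is exactly the one above: the -march=native build is faster on this machine but can’t be shipped as a generic prebuilt binary, which is what makes the Hard Problem™ hard.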


Services I know of that have both HTTPS and SSH access have seen all sorts of weird stuff, seemingly related to LLM bot scraping, over the past few months. Enough to bring down some git servers.


I solved this by using Linux anyway and being way more productive than other folks.
Look I don’t have heat in the winter so I compile Firefox for various processors to keep my bedroom warm okay?
The irony is that big things like Firefox benefit the most from being built for your specific CPU variant, especially if you use them frequently.


For fun.


I don’t know what’s worse: the fact that nobody at Microsoft registered that short URL for the lulz or the resulting destination for short URLs that don’t get found.


Maybe something like OpenStack?
Everything on the system, including the desktop, kernel, and CUPS, can be installed as a snap.
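A minimal illustration, with “cups” being the OpenPrinting snap name as far as I know:

    snap list               # what's already installed as snaps (snapd, core, firefox, ...)
    sudo snap install cups  # printing stack as a snap

On Ubuntu Core the kernel itself ships as a snap too (e.g. pc-kernel), and Canonical has been working on delivering the desktop session the same way.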


Turns out hosting a bunch of files is very cheap.
I think OP is talking about the fact that most new projects use “main” now, so “master” likely indicates an older project.
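For what it’s worth, the flip is visible in the tooling itself: upstream git still defaults to “master” unless you override it, while GitHub switched its default to “main” around late 2020, so most newly created repos pick up the new name. Locally that override looks like this:

    git config --global init.defaultBranch main
    git init demo && git -C demo symbolic-ref --short HEAD   # prints "main"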