

The problem is that the volume of slop available completely overwhelms all efforts at quality control. Zealotry only goes so far at turning back the tsunami of shite.


Indeed.
In some ways, this kind of thing is ideal for Rust. It’s at its best when you’ve a good idea of what your data looks like, you know where it’s coming from and going to, and what you really want is a clean implementation that you know has no mistakes. Reimplementing ‘core code’ that hasn’t changed much in twenty years to get rid of any foolish overflows or use-after-free bugs is perfect for it.
Using Rust for exploratory coding, or when the requirements keep changing? I think you’ve picked the wrong tool for the job. Invalidate a major assumption and you have to rewrite the whole damn thing. And like you say, an important consideration for big projects is choosing a tool that a lot of people will be able to use. And Windows is very big.
They’re smoking crack, anyway. A million lines per dev per month? When I’m doing major refactoring, a couple of thousand lines per week in the same language, mostly moving existing stuff into a new home, is a substantial change. A couple of orders of magnitude more than that, with a major language conversion on top? Get out of here.
The CentOS ‘eight-pointed star’?


Menu bar at the top at least makes some sense - it’s easier to mouse to it, since you can’t go too far. Having menus per-window like Linux, or like Windows used to before big ugly ribbons became the thing, is easier to overshoot. (Which is why I always open my menu bars by pressing ‘alt’ with my left thumb, and then using the keyboard shortcuts that are helpfully underlined. Windows likes to hide those from you now since they’re ‘ugly’, and also makes you mouse over the pretty icons to get the tooltip that tells you what they are, which is just a PITA. Pretty != usable.)
Mac OS has had the menu at the top since before it was a multitasking OS. They had them there on the first Mac I ever used, a Mac Classic 2 back in 1991 or so, and it was probably like that before then too. It’s not like they’ve been ‘innovating’ that particular feature and annoying their users.


The actual fix is probably ‘enable mixed ASCII / Windows-1252 calls to Windows UTF-16 functions’, when some strings have different codepages to others, or something silly. But that fix sounds better.
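For what it’s worth, the conversion itself is mundane - a minimal sketch, assuming the fix really does boil down to widening Windows-1252 text before it hits a UTF-16 API (widen_1252 and the whole shape of this are invented for illustration, not the actual patch):

```c
#include <windows.h>
#include <stdlib.h>

/* Illustration only - a guess at the kind of shim involved, not the real fix.
   Widens a Windows-1252 (single-byte) string to UTF-16 so it can be handed to
   a wide-character Win32 function. */
static wchar_t *widen_1252(const char *narrow)
{
    /* First call: ask how many UTF-16 code units are needed (including the NUL). */
    int len = MultiByteToWideChar(1252, 0, narrow, -1, NULL, 0);
    if (len <= 0)
        return NULL;

    wchar_t *wide = malloc(len * sizeof *wide);
    if (!wide)
        return NULL;

    /* Second call: do the conversion. */
    MultiByteToWideChar(1252, 0, narrow, -1, wide, len);
    return wide; /* caller frees */
}

int main(void)
{
    /* 0xE9 is 'é' in Windows-1252 but isn't plain ASCII. */
    wchar_t *title = widen_1252("Caf\xE9");
    if (title) {
        MessageBoxW(NULL, L"Hello from a widened string", title, MB_OK);
        free(title);
    }
    return 0;
}
```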
A rising tide lifts all boats - every improvement is welcome


I had 32GB of RAM in my desktop as 4x8GB; one of the sticks failed a couple of years ago, and it was cheaper to replace it with 64GB = 4x16GB than it was to get a replacement 8GB.
That’s convenient for work purposes (in fact, I could actually do with more) but massive pointless overkill for most games. Even games which do “big loads” - Witcher 3, say - aren’t noticeably quicker from RAM cache than they are off of an NVMe drive.
Generally, companies are trying to maximise profit, which means that the price will be reduced only when it’s stopped selling at the previous price and they want to make sales to the next, more price-conscious, segment of the market. They might want some quick bucks if the company is in financial trouble, or to ‘make the news’ with a sale if they need some publicity.
BG3 sold shedloads, is still selling shedloads, was on multiple games-of-the-year lists and generally ranks amongst the best games of all time, often at the top; and Larian seem sufficiently flush with cash from the success of it. So like you say, don’t hold your breath waiting for a big sale; it doesn’t make sense for them to do that.


Data centre GPUs tend not to have video outputs, and have power (and active cooling!) requirements in the “several kW” range. You might be able to snag one for work, if you work at a university or at somewhere that does a lot of 3D rendering - I’m thinking someone like Pixar. They are not the most convenient or useful things for a home build.
When the bubble bursts, they will mostly be used for creating a small mountain of e-waste, since the infrastructure to even switch them on costs more than the value they could ever bring.


There’s times when I want to find “exact matches and nothing but” - searching for error messages, for instance - and that’s made much harder than it should be by AI bullshit search engines that don’t want you to switch off their “helpful” features. Considering moving to Kagi instead.
Mine was my local Forgejo server, NAS server, DHCP -> DNS server for ad blocking on devices connected to the network, torrent server, syncthing server for mobile phone backup, and Arch Linux proxy, since I’ve a couple of machines that basically pull the same updates as each other.
I’ve retired it in favour of a mini PC, so it’s back to being a RetroPie server - loads of old games available in the spare room for when we have a party, which amuses children of all ages.
They’re quite capable machines. If they weren’t so I/O limited, they’d be amazing. They tend to max out at 10 megabytes/second on SD card or over USB / ethernet. If you don’t need a faster disk than that, they’re likely to be ideal in the role.


No unexpected crashes, no game breaking bugs. Performance was… dubious. It looks amazing, but UE5 has scalability issues. None of the graphics options seemed to do anything for frame count.


The studio is mostly ex-Ubisoft employees. So yeah, it’s their first game as that studio, but they’re by no means novice developers. Fair play to them for following their passion though, it’s paid off.


systemd-networkd gets installed by default on Arch, integrates a bit better with the rest of systemd, doesn’t have so many VPN surprises, and the configuration is a bit more obvious to me - a few config files rather than NetworkManager’s “loads of scripts” approach. Small niggles rather than big issues.
Really, I just don’t want duplication of services - more stuff to keep up-to-date. And if I’ve got systemd anyway, might as well use it…
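For anyone wondering what “a few config files” looks like in practice, here’s a minimal sketch - the interface name and addresses are example values, not from my setup:

```
# /etc/systemd/network/20-wired.network  (example values)
[Match]
Name=enp3s0

[Network]
DHCP=yes
# Or, for a static setup:
# Address=192.168.1.10/24
# Gateway=192.168.1.1
# DNS=192.168.1.1
```

Then systemctl enable --now systemd-networkd and that’s more or less the whole job.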


NetworkManager dependencies can now be disabled at build time…
Nice. It was a damned nuisance that Cinnamon brought its own network stack with it. All my headless servers and my Plasma gaming desktop use systemd-networkd, which meant that my Cinnamon laptop needed different configuration. Now they can all be the same.
Hopefully the new release will bash a few of the remaining Wayland bugs; Plasma is great but I prefer Cinnamon for work, and it’s just too buggy for gaming on a multi-monitor setup at the moment.


Especially since any version of Git from the last few years has a passionate hatred of symlinks for this reason, which is a bit annoying if you’ve a legit use case. They’re either very out-of-date, or have done some very foolish customisation…


HDMI -> DP might be viable, since DP is ‘simpler’.
Supporting HDMI means supporting a whole pile of bullshit, however - lots of handshakes. The ‘HDMI splitters’ that you can get on eg. Alibaba (which also defeat HDCP) are active, powered things, and tend to get a bit expensive for high resolution / refresh.
The Steam Machine is already being closely inspected on price. Adding a fifty-dollar dongle to the package is probably out of the question, especially a ‘spec non-compliant’ one.


I’m going to guess it would require kernel support, and certainly graphics card driver support. AMD and Intel wouldn’t be so difficult - just patch and recompile; NVIDIA’s binary blob, ha ha, fat chance. Stick it in a repo somewhere outside of the zone of copyright control, add it to your package manager, boom, done.
I bet it’s not even much code. A struct or two that map the contents of the 2.1 handshake, and an extension to a switch statement that says what to do if it comes down the wire.
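Something in that spirit - purely hypothetical, every name below is invented for illustration and bears no resemblance to any real driver code:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch only - illustrating "a struct or two plus one more
   switch case", not anything from an actual kernel or driver. */

struct hdmi21_caps {                /* invented layout for the 2.1 handshake block */
    uint8_t  version;               /* e.g. 0x21 for HDMI 2.1 */
    uint8_t  max_frl_rate;          /* Fixed Rate Link speed advertised by the sink */
    uint16_t max_tmds_clock_khz;    /* legacy TMDS limit */
    uint8_t  flags;                 /* VRR, ALLM, DSC, ... */
};

enum link_mode { LINK_TMDS, LINK_FRL };   /* invented */

static enum link_mode negotiate_link(const struct hdmi21_caps *caps)
{
    switch (caps->version) {
    case 0x21:
        /* The new case: the sink speaks 2.1, so prefer FRL if it advertises a rate. */
        return caps->max_frl_rate ? LINK_FRL : LINK_TMDS;
    default:
        /* Everything else falls back to the existing 2.0-and-earlier path. */
        return LINK_TMDS;
    }
}

int main(void)
{
    struct hdmi21_caps sink = { .version = 0x21, .max_frl_rate = 6 };
    printf("negotiated: %s\n",
           negotiate_link(&sink) == LINK_FRL ? "FRL (HDMI 2.1)" : "TMDS");
    return 0;
}
```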


Python tkinter interfaces might be inefficient, slow and require labyrinthine code to set-up and use, but they make up for it by being breathtakingly ugly.
Yeah. You know the first time you install Arch (btw), and you realise you’ve not installed a working network stack, so you need to reboot from the install media, remount your drives, and pacstrap the stuff you forgot on again? Takes, like, three minutes every time? Imagine that, but you’ve got a kernel compile as well, so it takes about half an hour.
Getting Gentoo so that it’ll boot to a useful command line took me a few hours. Worthwhile learning experience - you come to understand how boot / the initramfs / init and the core utilities all work together. Compiling the kernel is actually quite easy; understanding all the options is probably a lifetime’s work, but the defaults are okay. Setting some build flags and building the ‘Linux core’ is just a matter of watching it rattle by - it doesn’t take long.
Compiling a desktop environment, especially a web browser, takes hours, and at the end, you end up with a system with no noticeable performance improvements over just installing prebuilt binaries from elsewhere.
Unless you’re preparing Linux for eg. embedded, and you need to account for basically every byte, or perhaps you’re just super-paranoid and don’t want any pre-built binaries at all, then the benefits of Gentoo aren’t all that compelling.
It’s the Witcher 1 but redone in the Witcher 3 engine. They’ve reimplemented the combat rhythm minigame and the ‘sex cards’ are all in HD.