

If the ferrite is filtering a hum you can hear, it’s also filtering parts of your music that you can hear because a ferrite just dampens a frequency range and can’t tell what is and isn’t supposed to be there.
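The core claim here, that a passive filter attenuates by frequency and not by intent, can be restated as a toy calculation. This is a generic first-order low-pass response with made-up numbers, not a model of any real ferrite bead:

```python
import math

def attenuation(freq_hz, cutoff_hz):
    """Magnitude response |H(f)| of a first-order low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (freq_hz / cutoff_hz) ** 2)

cutoff = 5_000.0  # hypothetical cutoff chosen to knock down an 8 kHz whine

hum = attenuation(8_000.0, cutoff)     # the unwanted noise
cymbal = attenuation(8_000.0, cutoff)  # musical content at the same frequency

# The response depends only on frequency, so both get damped identically.
print(hum == cymbal)  # True
```

Nothing in the transfer function knows which 8 kHz content is hum and which is a cymbal; the same attenuation hits both.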


There’s a pretty good reason to think it’s not going to improve much. The size of models and the amount of compute and training data required to create them are increasing much faster than their performance is, they’re already putting serious strain on the world’s ability to build and power computers, and on the world’s ability to get human-written text into training sets (hence all the sites having to deploy things like Anubis to keep themselves functioning). The levers AI companies have access to are already pulled as far as they’ll go, so improvement can only keep slowing and the returns can only diminish faster.


If LLMs aren’t going to reach a point where they outperform a junior developer who needs too much micromanaging to be a net gain to productivity, then AI’s not going to be a net gain to productivity, and the only productive response is to fight its adoption, much like the only productive response to keyboards with a bunch of letters missing would be to refuse to use them. It’s not worth worrying about obsolescence until there’s some evidence they’re likely to get better, just like it wasn’t worth worrying about obsolescence when neural nets were being worked on in the 80s.


Usually, having to wrangle a junior developer takes a senior more time than doing the junior’s job themselves. The problem grows with the number of juniors they’re responsible for, so having LLMs simulate a fleet of junior developers will be a massive time sink, not a speedup over doing everything themselves. With real juniors, though, this can still be worthwhile: eventually they learn, require much less supervision, and become a net positive. LLMs do not learn once they’re deployed, though, so the only way they get better is if a cleverer model is created that can simulate a mid-level developer, and so far, the diminishing returns of progressively larger models make it seem pretty likely that something based on LLMs won’t be enough.


The vast majority of NATO that isn’t the US is covered by the EU’s Mutual Defence Clause, so this kind of already exists. It sucks for the parts of NATO that aren’t in the EU, though, e.g. Greenland, which is in NATO via Denmark but outside the EU.


Enzymes are specific to a particular molecule, or class of molecules with a particular pattern. A PEI buildplate is not getting eaten by the proteases in a dishwasher tablet. The reasons you’re not supposed to rinse things before putting them in the dishwasher are that the detergent is formulated to have food soil to work on (with nothing else to attack, it can end up etching your glassware) and that machines with soil sensors will run a shorter, weaker cycle if the water comes off the dishes clean.


I think it was pretty reasonable of them to worry - lots of people who don’t like spending unnecessary money also don’t like spending not-obviously-necessary money on safety equipment, and there’s plenty of material on the internet that would imply resin printing is completely safe as long as you don’t drink the stuff. Resin printing with woefully inadequate ventilation/PPE is really common, so it’s a pretty safe bet that anyone asking questions is probably also doing something unsafe without realising it, especially as resin not liking the cold is something a lot of people learn about fairly early on (unless they live somewhere where it never gets below 20°C).


To be fair, if I had all that money, I’d probably just pay someone to figure out how to make it do the most good, and continue spending at least some of my time shitposting. It’s okay to have hobbies, but it’s bad to hoard the money or invest it in evil.


I didn’t say that they did, just that switching to UE5 can be a mixed bag rather than always unambiguously better. My original comment was pretty explicit about it not being applicable to CDPR.


If you’re specifically working on a game that stock UE5 can’t do, e.g. you need to make the kind of far-reaching changes that Valve had to make to Source to make Portal possible, you end up with most of those problems even if you’re doing it by modifying UE5 rather than modifying your in-house engine. You’re still ending up with a custom engine at the end of the process and still need to make tooling for it and onboard everyone, even if it ends up fairly similar to stock UE5 due to being modified UE5. It doesn’t necessarily work out any more scalable or sustainable than modifying an in-house engine once you’re making this kind of change. The outcome ends up being that games that can’t be made in close-to-stock UE5 just don’t end up getting made.


It can be more of a mixed bag than that, though. If your employee retention and training is good enough that you have plenty of people who wrote the engine or at least understand it really well (which doesn’t seem to be the case at CDPR since Cyberpunk’s crunch), it can be much faster to alter it than to figure out the equivalent guts of Unreal Engine. That won’t end up making a difference if you stick to well-trodden paths that lots of games from lots of studios use, but if you want to do something that Unreal doesn’t support out of the box, it can be quite hard to wrangle.


CUDA is an Nvidia technology and they’ve gone out of their way to make it difficult for a competitor to come up with a compatible implementation. With cross-vendor alternatives like OpenCL and compute shaders, they’ve not put resources into achieving performance parity, so if you write something in both CUDA and OpenCL, and run them both on an Nvidia card, the CUDA-based implementation will go way faster. Most projects prioritise the need to go fast above the need to work on hardware from more than one vendor. Fifteen years ago, an OpenCL-based compute application would run faster on an AMD card than a CUDA-based one would run on an Nvidia card, even if the Nvidia card was a chunk faster in gaming, so it’s not that CUDA’s inherently loads faster. That didn’t give AMD a huge advantage in market share as not very much was going on that cared significantly about GPU compute.
Also, Nvidia have put a lot of resources over the last fifteen years into adding CUDA support to other people’s projects, so when things did start springing up that needed GPU compute, a lot of them already worked on Nvidia cards.


Generally, you’ll get better results by spending half as much on GPUs twice as often. Games generally aren’t made expecting all their players to have a current-gen top-of-the-line card, so you don’t benefit much from having a top-of-the-line card at first, and then a couple of generations later, usually there’s a card that outperforms the previous top-of-the-line card that costs half as much as it did, so you end up with a better card in the long run.
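As a back-of-the-envelope restatement (the prices and performance numbers here are entirely made up, purely to show the shape of the argument):

```python
# Hypothetical numbers: today's flagship, today's mid-range card at half the
# price, and the mid-range card two generations from now.
flagship_now = {"price": 800, "perf": 100}
midrange_now = {"price": 400, "perf": 75}
midrange_later = {"price": 400, "perf": 140}  # two generations on

# Buying mid-range twice costs the same as one flagship, but the second
# purchase outperforms the old flagship for the back half of the period.
total_spend_upgrader = midrange_now["price"] + midrange_later["price"]
print(total_spend_upgrader == flagship_now["price"])  # True: same outlay
print(midrange_later["perf"] > flagship_now["perf"])  # True: better card later
```

The first couple of years on the mid-range card cost you little in practice, since games target the mid-range anyway.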


I’ve found this is really dependent on placement. If I put my Libre a couple of centimeters away from the region I usually use, it’ll read low all night, but as long as I stick to the zone I’ve determined to be fine, it’ll agree with a blood test even if I’ve had pressure on it for ages. Also, the 3 is more forgiving than the 1 or 2: it’s smaller than the older models, so it’s less affected by how much the skin bends and squishes.


Plenty of TVs are capable of radioing your neighbour’s TV and piggybacking off their internet connection, so if it’s not in a Faraday cage, it might be overconfident to say it’s never been connected to a network.


It wouldn’t be Roko’s Basilisk if it didn’t do things that hurt.


He might just have digestive issues and really good balance. Maybe a hovercraft skirt sewn onto the seat of his pants.


It might also be completely unusable if it’s going to be touched by human hands, as hands get sweaty, and sweat is salty water.


Obviously, most people don’t replace their TV every year, so it was years after new sales were mostly LCDs that most people had LCDs, but companies making content like to be sure it looks good with the latest screens.


Fillets are easier to print horizontally than chamfers, as they spread the acceleration (i.e. the thing that makes sharp corners bad) over the whole fillet instead of splitting it into two stages like a chamfer does.
Chamfers are easier to print vertically than fillets as the overhang is limited and consistent.
There’s no overhang for a horizontal corner as you’re printing the same shape onto the layer below, and no acceleration for a vertical corner as it’s entirely separate layers so the toolhead never has to follow the path of the corner.
It sounds like you’ve read (or only remembered) half of a rule. Neither half applies the majority of the time on its own: 3D printers print 3D objects, which always have both horizontal and vertical edges.
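The vertical case can be sketched with a bit of trigonometry. Assuming a quarter-circle fillet and a 45° chamfer of comparable size (both hypothetical), the chamfer's overhang angle stays constant while the fillet's steepens all the way to horizontal:

```python
import math

def fillet_overhang_deg(t):
    """Overhang angle from vertical at fraction t (0 = top, 1 = bottom) along
    a quarter-circle fillet rounding the underside of a vertical edge."""
    return math.degrees(t * math.pi / 2)

CHAMFER_OVERHANG_DEG = 45.0  # constant along the entire chamfer face

for t in (0.0, 0.5, 1.0):
    print(f"fillet at t={t}: {fillet_overhang_deg(t):.0f}° "
          f"(chamfer: {CHAMFER_OVERHANG_DEG:.0f}°)")
# The fillet ends at a 90° overhang, i.e. printing into thin air, which is
# why a vertical fillet needs supports where a 45° chamfer doesn't.
```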