

500°C would be way above the safe operating temps, but most likely yes.


Server memory is probably reusable, though it's likely to be soldered and/or ECC modules. But a soldering iron and someone sufficiently smart can probably do it (if it isn't directly usable).


My experience, having actually tried this on a huge codebase: my time was better spent looking at file names and reading source code myself to answer specific questions about the code.
Using it to read a single file or a few of them might go better. If you can find the right files first, you might get decent output.


Spouting bullshit? If so, I agree.
Codebases in the 100k+ to 1M+ SLOC range can be very difficult for an LLM (or a human) to answer detailed questions about. An LLM might be able to point you in the right direction, but it doesn't have enough context size to fit the code, let alone the capability to actually analyze it. Summarize? Sure, but it can only summarize what it has in context.


TLDR: data is something you collect over time from users, so you shouldn’t let the contracts for it mindlessly drift, or you might render old data unusable. Keeping those contracts in one place helps keep them organized.
But that explanation sucks if you're actually five, so I asked ChatGPT to write that explanation for you, since that would be hard for me to do:
Here’s a super-simple, “explain it like I’m 5” version of what that idea is trying to say:
🧠 Imagine your toys
You have a bunch of toys in your room — cars, blocks, stuffed animals.
Now imagine this:
You put some cars in the toybox.
You leave other cars on the floor in another place.
You keep some blocks in a bucket… and some blocks on the shelf.
And every time you want a toy, you have to run to a different spot to find its matching pieces.
That would be really confusing and hard to play with, right? Because things are spread out in too many places for no good reason.
🚧 What the blog is really warning about
In software (computer programs), “state” is like where toys are stored — it’s important information the program keeps track of. For example, it could be “what level I’m on in a game” or “what’s in my cart when I shop online.”
The article says the biggest mistake in software architecture is:
Moving that important stuff around too much or putting it in too many places when you don’t need to.
That makes the program really hard to understand and work with, just like your toys would be if they were scattered all over the place. (programming.dev)
🎯 Why that matters
If the important stuff is all over the place:
People get confused.
It’s harder to fix mistakes.
The program gets slower and more complicated for no reason.
So the lesson is:
👉 Keep the important information in simple, predictable places, and don’t spread it around unless you really need to. (programming.dev)
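If you want it in code instead of toys, here's a minimal sketch of scattered vs. centralized state (all names are made up):

```python
from dataclasses import dataclass, field

# Scattered: several copies of the same fact live in different places
# and drift apart the moment one update is forgotten.
cart_items: list[str] = []    # lives in one module...
cart_item_count: int = 0      # ...a duplicate count lives in another
cart_total_cents: int = 0     # ...and a derived total lives in a third

# Centralized: one predictable home for the state; everything else is
# derived from it on demand, so nothing can fall out of sync.
@dataclass
class Cart:
    items: list[str] = field(default_factory=list)

    @property
    def count(self) -> int:
        return len(self.items)

cart = Cart()
cart.items.append("toy car")
print(cart.count)  # 1, always consistent with items
```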


We’re postponing the announced billing change for self-hosted GitHub Actions to take time to re-evaluate our approach.


open to any feedback, always willing to learn
A common pattern with executable Python scripts is to:
- Include a shebang (#!/usr/bin/env python3) to make it easier to execute
- Check if __name__ == "__main__" before running any of the script, so the functions can be imported into another script without running all the code at the bottom
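A minimal sketch of that pattern (main is just a placeholder name):

```python
#!/usr/bin/env python3
"""Example script following the pattern above."""

def main() -> None:
    # Real work goes here. Keeping it inside a function means another
    # script can import this module without triggering side effects.
    print("doing the actual work")

if __name__ == "__main__":
    # Only runs when executed directly, not when imported.
    main()
```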

Any website using CSR only can't have an RCE, because the code runs on the client. Any site using RSC, where code runs both server side and client side, may be vulnerable.
From what I’ve seen, the exploit is a special request from a client that functionally lets you exec anything you want (via Function’s constructor). If your server is unpatched and recognizes the request, it may be (likely is) vulnerable.
I’m sure we’ll get more details over time and tools to manually check if a site is compromised.
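As a rough illustration of the bug class (a hypothetical Python analog, not the actual React exploit): any code path that feeds client-controlled input into an eval-like primitive is remote code execution.

```python
# Hypothetical analog of the bug class, NOT the actual React exploit.
# Evaluating a client-supplied "expression" server side hands the
# attacker arbitrary code execution, much like abusing JavaScript's
# Function constructor.
def handle_request(payload: str) -> object:
    return eval(payload)  # attacker controls payload => attacker controls the server

# A benign client might send "1 + 1"; a malicious one could send
# "__import__('os').system('id')" and run arbitrary commands.
print(handle_request("1 + 1"))
```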


I’m not the one recommending it lol.
If I had to guess, it's to improve page performance by prerendering as much as possible. I find that overkill, though: I prefer to prerender as much of the page as I can at build time and do CSR for the rest. That doesn't work if you have dynamic routes or some kind of server-side logic, but it's good for blogs and such.


I think their point was that CSR-only sites would be unaffected, which should be true. Exploiting it on a static site, for example, couldn’t be RCE because the untrusted code is only being executed on the client side (and therefore is not remote).
Now, most people use SSR/RSC these days, or at least are recommended to. Many frameworks enable SSR by default. But using raw React, with no Next.js, react-router, etc., to create a client-side-only site does likely protect you from this vulnerability.


30 is assuming you write code for all 30 days. In practice, it's closer to 20 days, so 75 tests per day. That's doable on some days for sure (if we include parameterized tests), but I don't strictly write code every day either.
Still, I agree with them that you generally want to write a lot of tests, but volume is less important than quality and thoroughness. Using volume alone as a meaningful metric, as the author does, is nonsense.


This is more likely the actual incident report:
A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
Edit: If you like reading


1500 tests is a lot. That doesn’t mean anything if the tests aren’t testing the right thing.
My experience was that it generates tests for the sake of generating them. Some are good. Many are useless. Without a good understanding of what it’s generating, you have no way of knowing which are good and which are useless.
It ended up being faster for me to just learn the testing libraries and write my own tests. That way I was sure every test served a purpose and tested the right thing.


I'm interested to see whether these tools can be used to tackle tech debt (the argument for not addressing tech debt is often a lack of time), or whether they would just contribute to it, even with thorough instructions and guardrails.
From my experience working with people who use them heavily, they introduce new ways of accumulating tech debt. Those projects usually end up having essays of feature spec docs, prompts, state files (all in prose of course), etc. Those files are anywhere from hundreds to thousands of lines long, and there’s a lot of them. There’s no way anybody is spending hours reading through enough markdown to fill twenty encyclopedia-sized books just to make sure it’s all up-to-date. At least, I can promise that I won’t be doing it, nor will anyone I know (including those using AI this way).


And it often generates a bunch of markdown docs that are plain drivel; luckily, most devs just delete those before I see them.
My favorite is when it generates a tree of the files in a directory in a README and a description for each file. How the fuck is this useful? Files will be added and removed, so there’s now an additional task to update these docs whenever that happens. Nobody will remember to do so because no tool is going to enforce that and it’s stupid anyway.
Sure, document high level directories. But do you really need that all in the top level README?
But for real if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!
Nothing to add. Just quoting this section because it needs to be highlighted lol.


Why not?
Are you asking the author or people in general? If the author didn’t answer “why not” for you, then I can.
Yes, I’ve used Claude. Let’s skip that part.
If you don't know how to write or identify defensive code, you can't know whether the LLM generated defensive code. So for an LLM to be trusted to generate defensive code, it needs to do so 100% of the time, or very close to it.
You seem to be under the impression that Claude does so, but you presumably can tell if code is written with sufficient guards and tests. You know to ask the LLM to evaluate and revise the code. Someone without experience will not know to ask that.
Speaking now from my experience, after using Claude for work to write tests, I came out of that project with no additional experience writing tests. I had to do another personal project after that to learn the testing library we used. Had that work project given me sufficient time to actually do the work, I’d have spent some time learning the testing library we used. That was unfortunately not the case.
The tests Claude generated were too rigid. It didn’t test important functionality of the software. It tested exact inputs/outputs using localized output values, meaning changing localizations was potentially enough to break tests. It tested cases that didn’t need to be tested, like whether certain dependency calls were done in a specific order (those calls were done in parallel anyway). It wrote some good tests, but a lot of additional tests that weren’t needed, and skipped some tests that were needed.
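For contrast, here's the kind of difference I mean, as a pytest-style sketch (the function and strings are invented, not from that project):

```python
# Hypothetical app code, just enough to run the tests against.
GREETINGS = {"de": "Willkommen zurück, {username}!", "en": "Welcome back, {username}!"}

def render_greeting(locale: str, username: str = "Benutzer") -> str:
    return GREETINGS[locale].format(username=username)

# Brittle: pins an exact localized string, so editing the German
# translation breaks the test even though the behavior is fine.
def test_greeting_exact_string():
    assert render_greeting("de") == "Willkommen zurück, Benutzer!"

# Sturdier: asserts the behavior that matters (the username appears),
# independent of the localization text.
def test_greeting_mentions_username():
    assert "ada" in render_greeting("de", username="ada")
```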
As a tool to help someone who already knows what they’re doing, it can be useful. It’s not a good tool for people who don’t know what they’re doing.


Homepage:
A language compiled to Bash.
Also:
A modern, type-safe programming language that catches bugs and errors at compile time.
Mixins are composition! They don't describe what a type is ("circle" is a "shape", etc.) but rather what it can do ("circle" can have its area calculated, it can be drawn, it can be serialized, etc.). Mixins in Python just so happen to be implemented by adding base classes.
Inheritance itself isn’t really a problem. It usually only matters when you have unnecessarily deep hierarchies, where a change in a base class can change functionality in dozens of classes in an unintentional way. Similarly, it can add complexity once the hierarchy is deep enough, but only really if you throw too much into the base classes.
Python’s ABCs are more of interfaces though, which is why despite Python using base classes to “inherit” them, a lot of that is really composition (or putting a class together from parts) rather than inheriting and overriding implementation details from a parent/grandparent/etc type.
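A minimal sketch of that distinction (the class names are my own):

```python
from abc import ABC, abstractmethod
import json

class Drawable(ABC):
    """Interface-style ABC: declares what a type can do, not what it is."""
    @abstractmethod
    def draw(self) -> None: ...

class JSONSerializableMixin:
    """Mixin: bolts on one capability, implemented as a base class."""
    def to_json(self) -> str:
        return json.dumps(self.__dict__)

class Circle(JSONSerializableMixin, Drawable):
    def __init__(self, radius: float) -> None:
        self.radius = radius

    def draw(self) -> None:
        print(f"circle with radius {self.radius}")

# Circle is assembled from capabilities rather than inheriting
# implementation details from a deep parent hierarchy.
c = Circle(2.0)
c.draw()
print(c.to_json())  # {"radius": 2.0}
```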
OOP debates usually turn into inheritance vs. composition, which is weird because every modern language in common use has objects, and most OOP languages lean toward composition these days.
The core OOP concepts are universal and important.
Yes, actually. Data centers are designed to cool down components pretty efficiently. They aren’t cooking the RAM at 500°C.