Probably why it crashed then
Ah yes trustworthy source 80.lv
Look into this source for like 2 seconds: it is a marketing research company, not journalists, with an extremely suspicious and likely generated team. https://80.lv/contact-us#audience https://my.linkedin.com/in/arti-sergeev the “Head” of 80 Level doesn’t even seem like a real person, definitely not a real picture.
I’d like to read the stackademic link without signing up.
I wish they’d replace the executives first.
Absolute fucking MORONS have taken over the world, and are wrecking it.
IF we ever get our country back, we have to go forward with a national strategy that we no longer have to be polite to treasonous MAGA scumbags. Every time they open their mouths they should be shouted down with screams to shut the fuck up.
We should never have to tolerate the opinions of stupid, violent traitors.
MBAs are mortal enemies of software engineers. Couple that with what one former CEO of mine said: “engineers have very well tuned bullshit detectors” and you arrive at the problem…
We can send them to work camps like during the Depression.
They’ll all pretend they were never MAGA, why are you still going on about Trump, we’re trying to look forward, etc.
This is exactly how the GOP disowned the Bush II Administration with the Tea Party.
And that’s when we have to scream the loudest, and absolutely refuse to give their gaslighting any credibility at all. If you were a Republican during this era, then you are a MAGA Traitor, racist, rapist, incompetent, intolerant, violent, stupid, pedophile.
And you ALWAYS will be, forever.
Could give em the Hans Landa treatment
This happens with any disruptive tech. In the 80s, old white CEOs were computerizing everything without understanding computers. In 2025, every grey CEO throws around A1 without having the first clue how it works.
throws around A1 without having the first clue how it works.
Just open the bottle and pour it on your well-done steak?
I’ve been waiting for something like this so we can see whose heads roll when AI fucks up. I figured we’d see doctors and lawyers losing their licenses first, but maybe it’ll be this. So, who shoulders the blame when a program that can’t learn from its mistakes fucks up a quarter of the internet?
Amazon has laid off or scared off the vast majority of their most experienced people. Those that weren’t laid off quit over stupidity like “RTO”. I don’t doubt that their underpaid junior staff and Kool-Aid drinking upper management decided that AI is a great way to replace all the lost knowledge and expertise. As with the downfall of civilization, this will get much worse before it gets better. It will be interesting to see how huge companies react to another company’s enshittification actively damaging their business and reputation.
What a great idea to test this on paying customers’ live production websites.
I’d believe them if they said they tested small-scale locally. Even good software/designs can implode when they get scaled up, and I doubt this was good design or software.
I know there’s doubt as to the validity of the claims. I only want to say this: when “AI” takes jobs, who is there to plug things in to make the “AI” machine go?
Sounds like Amazon fucked around and found out… Allegedly.
A handful of senior engineers or developers. And then we’re even more fucked when they retire or die, because no one is hiring junior engineers or developers
The good ones leave as things turn to shit, or are laid off because their salaries were too high.

I really want this to be true, not only because I believe that would be the immediate outcome, but also because it would be hilarious.
But a somewhat credible source that’s not wrapped in “allegedly” and old stories would really help drive the point home.
More likely their ham-fisted return-to-work layoff scheme has caused them to bleed experienced staff. The only people left are less experienced and give less of a shit. It took them over an hour to realize the database of their DNS system was the issue.
I asked a buddy that works at Amazon about the outage and he pointed me to this article.
https://www.theregister.com/2025/10/20/aws_outage_amazon_brain_drain_corey_quinn/
I know quite a few people who currently work there and pretty much all of them are trying to leave.
I wonder how long AI will function when it starts feeding on AI generated junk data. Like a photocopy of a photocopy.
This is a really good article. Thanks for posting it.
TLDR:
report had been published right before the outage, alleging that the company had laid off 40% of its DevOps team to replace them with AI. … there is a lot of skepticism around this article… although we do not claim it is true or is somehow connected to the systems’ crash.
Super-Duper-short-version:
No trustworthy data about the incident.
Amazon is laying off or has lost truly staggering numbers of experienced staff.
So it might not be AI, although my experience with AI suggests it’s right about 60% of the time, and there’s no way I would let it implement its own recommendations anywhere near anything that earned me money.
It might just be cheaper, younger, newer staff making mistakes they don’t know how to fix.
https://www.theregister.com/2025/10/20/aws_outage_amazon_brain_drain_corey_quinn/
To be fair, this is 2025, there’s no trustworthy data about anything any more.
Hell, you might just be an AI. Or I might just be an AI. Or maybe there wasn’t even an outage. I didn’t notice any issues, so maybe AI hallucinated it, or it’s been made up as clickbait and been second-hand reported by thousands of news sites that don’t care about fact checking any more.
Not that there are facts any more, anyway.
The news could report on a nonexistent attack on US soil and suddenly trigger World War III
DevOps cannot be automated away. So stupid.
DevOps is one of the most automated parts of software development and deployment actually.
Article seems like complete bullshit anyway.
Most of my work in DevOps isn’t in front of my text editor writing scripts. It’s spent hopping between dashboards, drafting emails, doing RCA, teaching dev team members how to use pipelines, and getting requirements from them for designing new pipelines. Then inevitably debating with them about design considerations when they ask for a set of procedures that won’t pan out.
Until your AI is a fully fledged team member who everyone can feel comfortable engaging with as if they were a real human, you cannot possibly begin to automate this.
Most of my work in DevOps isn’t in front of my text editor writing scripts.
Mine either - we use Azure DevOps, Octopus, etc. - tools that automate DevOps. Most of DevOps is automated across the board - no big companies are manually kicking off builds for every PR, pushing the files around the place, and then manually deploying them - it’s all automated using DevOps tools. Having AI build and manage these pipelines seems like a logical place to use it, as they are all just about creating steps using pieces from previous steps and other systems.
You absolutely could have AI create a pipeline to build, test, and deploy a solution, and then test the actual deployed solution. The AI is essentially just the coordinator here, tying together the other devops tools.
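For what it’s worth, the “coordinator” part really is mostly step sequencing. A minimal sketch of what any pipeline runner does under the hood (step names and commands here are made up for illustration; a real tool like Azure DevOps or Octopus layers logging, artifacts, retries, and approvals on top of this same loop):

```python
import subprocess

# Hypothetical pipeline: each step is a name plus a command to run.
PIPELINE = [
    ("build", ["echo", "building"]),
    ("test", ["echo", "running tests"]),
    ("deploy", ["echo", "deploying"]),
]

def run_pipeline(steps):
    """Run steps in order; stop at the first failing step.

    Returns (completed_step_names, failed_step_name_or_None).
    """
    completed = []
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return completed, name
        completed.append(name)
    return completed, None

if __name__ == "__main__":
    done, failed = run_pipeline(PIPELINE)
    print("completed:", done, "failed:", failed)
```

The debate upthread is really about who writes and repairs `PIPELINE`, not who executes it - executing it has been automated for years.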
Devops is often figuring out why automation didn’t work.
Absolutely, but that’s not an argument against what I said.
DevOps is not executing the automation, but designing it. DevOps is not manually spinning up pods but writing the automation that does so.
Why do you think AI can’t easily be the supervisor for pipelines and create new ones? It’s basically just creating steps that are well known from building a branch to deploying it.
Tried to figure out yesterday why a user couldn’t ssh into a server, tried LLMs to figure it out, completely useless. Had to go into some log file somewhere to find out the one who set up the server made a specific group for ssh and if a user wasn’t in that group they couldn’t connect. The LLMs (ChatGPT and Gemini) gave me bullshit about changing flags in the sshd config…
Cool story. LLMs can’t see error messages in logs if you don’t give them access to it. Had you given the AI agent access to those files? Or were you just using a standalone LLM with no access to the system?
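For anyone who hits the same wall: a dedicated group gating ssh access is usually an `AllowGroups` directive in `sshd_config`, which silently rejects anyone outside the listed groups. A rough sketch of the check, assuming that directive is the culprit (the config text is inlined here for illustration; on a real server you’d read `/etc/ssh/sshd_config` or the output of `sshd -T`):

```python
# Sketch: does a user's group membership satisfy sshd's AllowGroups?
SSHD_CONFIG = """\
Port 22
AllowGroups sshusers admins
PasswordAuthentication no
"""

def allowed_groups(config_text):
    """Return the set of groups on the AllowGroups line, or None if absent."""
    for line in config_text.splitlines():
        parts = line.split()
        if parts and parts[0].lower() == "allowgroups":
            return set(parts[1:])
    return None  # no directive means no group restriction

def can_connect(user_groups, config_text):
    """True if there is no AllowGroups line, or the user is in a listed group."""
    allowed = allowed_groups(config_text)
    return allowed is None or bool(allowed & set(user_groups))

print(can_connect({"developers"}, SSHD_CONFIG))  # False: not in sshusers/admins
print(can_connect({"sshusers"}, SSHD_CONFIG))    # True
```

Which is the commenter’s point in practice: a standalone LLM guessing at sshd flags can’t do this, but anything (human or agent) that can actually read the config and run `id <user>` finds it in seconds.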
Easy fix: give an LLM root access to all production critical servers and allow everyone in the company to chat with it.
First reaction: fear.
Second: I chuckled. Because I thought of some VP-level enforcing this joke as SOP.
And then a little more fear, as a treat.
Haha if that happens it is time to get out fast
…noooooo, it most definitely isn’t.
While the job does deal heavily in automating things, it only automates Boolean things. Looking at a platform and seeing why and where it’s failing is not a Boolean thing, and never will be. It’s the same reason we still don’t have machines that repair cars over 100 years after their introduction.
Looking at a platform and seeing why and where it’s failing is not a Boolean thing, and never will be.
AI can see why and where it’s failing too if it has the appropriate permissions and access.
It’s the same reason we still don’t have machines that repair cars over 100 years after their introduction.
No it’s not. DevOps is all software, repairing cars is not. Car ECUs can tell you exactly what is wrong with your car.
Exactly. DevOps engineers are already super skilled at using automation where appropriate, but knowing how and when to do that is still an extremely human task
Automation with a lot of validation steps that are not very obvious. Because if they were, we’d have automated them away.
This report is a blog linking to a blog.
Thanks, I didn’t see that. I’ll add it to the description.
The execs started buying into their own bullshit…