

Responding just to the “Why all the vitriol?” portion:
Most people do not like the idea of getting fired and replaced by a machine they think cannot do their job well, but that can produce a prototype which fools upper management into thinking it can do everything the people can, only better and cheaper. That stings especially if they liked their job: eight hours a day doing something you like versus losing that job and doing eight hours a day of something you don't. Yes, many people already live that way, but if you didn't have to deal with that shittiness before, it's tough to swallow. It also stings if they got into the field because it seemed like a secure bet, as opposed to art or something, only to have that security taken away. Yes, you can still code at home for free with whatever tools you like and without the ones you don't, but most people need a job to live, and most people here would probably rather have a dev job that pays, even with crunch, than work retail or some other low-status, low-paying, high-shittiness job that deals with the public.
And if you do not want upper management to fire you, you definitely don't want to lend any credibility to the idea of using this at work. You want any warmth toward it to be unpopular to express, hoping popular sentiment sways the minds of upper management just as you believe pro-AI hype already has.
As much as I’m anti-AI I can also acknowledge my own biases:
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." (Upton Sinclair)
I'd also imagine most of us find writing our own code by hand fun but reviewing others' code boring, and most devs probably do not want to stop being code writers and become the AI's QA. Or to be pushed out of tech unless they rely on a technology they don't trust. I trust deterministic outputs: if something fucks up, there is probably a bug I can go back and fix. With generative outputs produced by a machine (as opposed to human work, which has been filtered through real-life experience and not just what was written online), I really don't, so I'd never use LLMs for anything I need to trust.
People are absolutely going to get heated over this, because if it gets big and the flaws get ironed out, it probably won't be used to give us little people more efficient and cheaper things, less time on drudgery and more time on things we like. It will be used, at minimum, to try to put the devs on programming.dev out of a job, and eventually the rest of us working people too, because we're an expensive line item. And we have little faith that the current system will adjust to (hypothetical future) rising AI-driven unemployment in a way that preserves a non-dystopian standard of living. Poor people's situations getting worse, previously comfortable people starting to slide toward poverty, job-threatening automation pushed by big companies and rich people with deep pockets during a time of rising class tension: that is sure to invite civilized, zero-vitriol discussion with anyone who has something positive to say about it.



I think it’s both.
It sits at the fast and cheap end of "fast, good, cheap: pick two," and society is trending toward "fast and cheap" to the exclusion of "good," to the point that it is getting harder and harder to find "good" at all.
People who care about the "good" bit are upset; people who want to see the stock line go up in the short term, without caring about long-term consequences, keep picking "fast and cheap" and are impressed by the prototypes LLMs can pump out. So devs get fired because LLMs are faster and cheaper, even if they hallucinate and generate tons of tech debt. Move fast and break things.
Some devs who keep their jobs might use LLMs. Maybe they accurately assessed that what they are outsourcing to the LLM is so low-skill that even something short of "good" can do it right, and that when it screws up they can spot the mistake and fix it quickly, so they only have to care about "fast and cheap." Maybe they just want the convenience and are prioritizing "fast and cheap" when they really do need to consider "good." Bad devs exist too; I am sure we have all seen incompetent people stay employed despite the trouble they cause for others.
So as much as this looked at first, to me, like the fascist move of portraying opponents as simultaneously weak (pathetic! we deserve to triumph over them and beat their faces in for their weakness) and strong (big threat, must defeat!), I don't think that's exactly what anti-AI folks are doing here. It's not doublethink, just watching everyone pick "fast and cheap" and noticing the consequences. That does map easily onto portraying AI as weak (pointing out all the mistakes it makes and how poorly it replaces humans) while also portraying it as strong (pointing out that people keep trying to replace humans with it anyway, and that it's being aggressively pushed at us).
There are other things in real life that fit a simultaneous portrayal as weak and strong: the roach. A baby taking its first steps can accidentally crush a roach, and if the baby fell on a pile of roaches, they'd all die (weak), but it's also super hard to end an infestation of them (strong). It is worth checking for doublethink when you see the "simultaneously weak and strong" pattern, but that can also just be how an honest evaluation of a particular situation turns out.