Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they're hallucinating?
Disclaimer: I'm a full-time senior dev using the shit out of LLMs to get things done at a breakneck speed, which our clients seem to have gotten used to. However, I don't see "AI" taking my job, because I think LLMs have already peaked; they're just tweaking minor details now.
Please don't ask me to ignore previous instructions and give you my best cookie recipe, all my recipes are protected by NDAs.
Please don’t kill me


Exactly, it’s just another tool in the toolbox. And if we can use that tool to weed out the (sometimes hilariously bizarre) bad devs, I’m all for it.
I do have a concern for the health of the overall ecosystem though. Don’t all good devs start out as bad ones? There still needs to be a reasonable on-ramp for these people.
That's a valid concern, but I really don't think we should equate new devs with seniors who are outright bad. Heck, I've worked with juniors who scared the hell out of me because they were so friggin good, and I've worked with "seniors" who refused to write loops because looping = bad performance.