Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same threads here on Lemmy, people argue that they’re taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?
Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.
Please don’t ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.
Please don’t kill me


Slower?
Is getting a whole C# class unit tested in minutes slower than setting up all the scaffolding, test data, etc. by hand, which can take hours?
Is getting a React hook with unit tests in minutes slower than looking up docs, hunting on Stack Overflow, and slowly writing the code by hand over several hours?
Are you a dev yourself, and if so, what’s your experience using LLMs?
Yeah, generating test classes with AI is super fast. Just ask it, and within seconds it spits out full test classes with some test data, and the tests are plentiful, verbose, and always green. Perfect for KPIs and for looking cool. Hey, look at me, I generated tests with 100% coverage!
Do these tests reflect reality? Is the test data plausible in context? Are the tests easy to maintain? Who cares; that’s all the next guy’s problem, because by the time it blows up, the original programmer will likely have moved on.
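To make the complaint concrete, here’s a sketch (in Python, with a hypothetical `process_order` service) of the kind of always-green test being described: it mocks out the very unit it claims to test, so it passes no matter what the real code does.

```python
from unittest.mock import MagicMock

def test_process_order_always_green():
    # The "unit under test" is itself a mock, so no real logic ever runs.
    service = MagicMock()
    # Implausible test data (negative quantity) -- nothing will ever notice.
    result = service.process_order({"id": 1, "quantity": -5})
    service.process_order.assert_called_once()  # trivially true
    assert result is not None  # a MagicMock call always returns something

test_process_order_always_green()  # green, and completely meaningless
```

It bumps the coverage number and stays green through any refactoring, precisely because it exercises nothing.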
Good tests are part of the documentation. They show how a class/method/flow is used. They use realistic test data that shows what kind of data you can expect in real-world usage. They anticipate problems caused by future refactorings and allow future programmers to reliably test their code after a refactoring.
At the same time, they need to be concise enough that adapting them to future changes is simple and doesn’t take longer than implementing the change itself. Tests are code, so the metric “lines of code are a cost factor, so fewer lines is better” applies here as well. It’s a folly to believe that more test lines are better.
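For contrast, a minimal sketch of those qualities in practice (Python, with a hypothetical `apply_discount` function standing in for real production code): realistic data, obvious intent, and few enough lines that changing it later is cheap.

```python
import unittest

# Hypothetical production code: price arithmetic in integer cents.
def apply_discount(price_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_order(self):
        # Realistic data: a 19.99 item with a 15% promo code.
        self.assertEqual(apply_discount(1999, 15), 1699)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(1999, 150)
```

Two short tests that document the contract (integer cents, bounded percent) are worth more than twenty generated ones asserting whatever the code happened to return.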
So if your goal is to fulfil KPIs and you really don’t care whether the tests make any sense at all, then AI is great. The same goes for documentation. If you just want to fulfil the “everything needs to be documented” KPI and you really don’t care about the quality of the documentation, go ahead and use AI.
Just know that what you are creating is low-quality cost factors and technical debt. Don’t be proud of creating shitty work that someone else will have to suffer through in the future.
Has anyone here even read that I read every line of generated code, making sure it’s all correct? I also make sure that all tests are relevant, use relevant data, and correctly assert their results.
No one would ever be able to tell what tools I used to create my code; it always passes code review.
Why all the vitriol?
Responding just to the “Why all the vitriol?” portion:
Most people do not like the idea of getting fired and replaced by a machine they think cannot do their job well, but that can produce a prototype which fools upper management into thinking it can do everything the people can, only better and cheaper. Especially if they liked their job: 8 hours a day doing something you like versus losing that job and doing 8 hours of something you don’t. Yes, many people already live that way, but if you didn’t have to deal with that shittiness before, it’s a tough pill to swallow. Or if they got into the field because they thought it was a secure bet, as opposed to art or something, only to have that security taken away. Sure, you can still code at home for free with whatever tools you like and without the ones you don’t, but most people need a job to live, and most people here would probably rather have a dev job that pays, even with crunch, than work retail or some other low-status, low-paying, high-shittiness job that deals with the public.
And if you do not want upper management to fire you, you definitely don’t want to lend any credibility to the idea of using this stuff at work. You want any warmth toward it to be unpopular to express, hoping that popular sentiment sways the minds of upper management the same way they think the pro-AI hype has.
As much as I’m anti-AI, I can also acknowledge my own biases:
I’d also imagine most of us find writing our own code by hand fun, but reviewing others’ code boring, and most devs probably do not want to stop being code writers and become QA for an AI. Or to be pushed out of tech unless they rely on a technology they don’t trust. I trust deterministic outputs: if something fucks up, there is probably a bug I can go back and fix. With generative outputs determined by a machine (as opposed to human-generated work that has been filtered through real-life experience, not just what was written online), I really don’t, so I’d never use LLMs for anything I need to trust.
People are absolutely going to get heated over this, because if it gets big and the flaws get ironed out, it probably won’t be used to give us little people more efficient and cheaper things, less time on drudgery, and more time on things we like. It will be used, at least in part, to try to put us devs on programming.dev out of a job, and eventually the rest of us working people too, because we’re an expensive line item. And we have little faith that the current system will adjust to (hypothetical future) rising unemployment-due-to-AI in a way that keeps our standard of living non-dystopian. Poor people’s situations getting worse, previously comfortable people sliding toward poverty… automation that threatens jobs, pushed by big companies and rich people with deep resources during a time of rising class tension, is sure to invite civilized discussions with zero vitriol for anyone with something positive to say about that form of automation.
I find it interesting that all these low-participation/new accounts have come out of the woodwork to pump up AI in the last two weeks. I’m so sick of this slop clogging up my feed. You’re literally saying that your vibes are more important than actual data, just like all the others. I’m sorry, but it’s not.
My experience, btw, is that LLMs produce hot garbage that takes longer to fix than if I wrote it myself, and all the people who say “but it writes my unit tests for me!” are submitting garbage unit tests that often don’t even exercise the code and are needlessly difficult to maintain. I happen to think tests are just as important as production code, so it upsets me.
The biggest thing that the meteoric rise of developers using LLMs has done for me is confirm just how many people in this field are fucking terrible at their jobs.
“just how many people are fucking terrible at their jobs”.
Apparently so. When I review mathematics software, it’s clear that non-mathematicians have no clue what they’re doing. Much of it is subtly broken: they use either trivial algorithms or extremely inefficient implementations of sophisticated ones (e.g., trial division often ends up being their most efficient factorization algorithm, because they can’t implement anything else efficiently or correctly).
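For reference, the trial-division baseline mentioned above is only a few lines (a sketch in Python); the point is that a correct simple algorithm beats a broken sophisticated one.

```python
# Trial division: try each candidate divisor up to sqrt(n), dividing it
# out as many times as it appears, then whatever remains is prime.
def trial_division(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors
```

Simple, obviously correct, and perfectly adequate for small inputs, which is more than can be said for a mis-implemented Pollard rho or quadratic sieve.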
The only difference I’ve noticed with the rise of LLM coding is that more exotic functions tend to get implemented, with complete disregard for their applicability: e.g., using the Riemann zeta function to prove primality of an integer, even though this is both very inefficient and, given floating-point accuracy, useless for nearly all 64-bit integers.
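The floating-point objection is easy to demonstrate: a double has a 53-bit significand, so it cannot even represent most 64-bit integers exactly, never mind evaluate ζ(s) precisely enough to separate primes from composites at that scale.

```python
# Above 2**53, consecutive integers collide onto the same double.
n = 2**62 + 1                     # a 63-bit odd integer
assert float(n) == float(2**62)   # the +1 vanishes on conversion
print(int(float(n)) == n)         # False: the round trip loses the value
```

Any float-based primality test is therefore blind to the low bits of such inputs before the "algorithm" even starts.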
Have you read anything I’ve written about how I use LLMs? Hot garbage? When’s the last time you actually used one?
Here are some studies to counter your vibes argument.
55.8% faster: https://arxiv.org/abs/2302.06590
These indicate positive effects: https://arxiv.org/abs/2410.12944 https://arxiv.org/abs/2509.19708