



Well, if I’m not, then neither is an LLM.
But for most projects built with modern tooling, the documentation is fine, and they mostly have simple CLIs for scaffolding a new application.


Yeah, I have never spent “days” setting anything up. Anyone who can’t do it without spending “days” struggling with it is not reading the documentation.


Sadly, there are some who don’t even know it, because they’re buying services from someone else that buys them from someone else that buys them from Amazon. So they’re currently wondering what the fuck is even going on, since they thought they weren’t using AWS.


I’m a software developer and my company is piloting the use of LLMs via Copilot right now. All of them suck to varying degrees, but the consensus is that GPT-5 is the worst of them. (To be fair, no one has tested Grok, but that’s because no one in the company wants to.)


On top of that, there’s so much AI slop all over the internet now that the training data for their models is going to get worse, not better.


They’ll ask their parents, or look up cooking instructions on actual websites.


Venture capital drying up.
Here’s the thing… No LLM provider is turning a profit. None of them. Not OpenAI. Not Anthropic. Not even Google (they’re profitable in other areas, obviously). OpenAI optimistically believes it might start being profitable in 2029.
What’s keeping them afloat? Venture capital. And what happens when those investors decide to stop throwing good money after bad?
BOOM.


There are tricks to getting better output from it, especially if you’re using Copilot in VS Code and your employer is paying for access to models, but it’s still asking for trouble if you’re not extremely careful, extremely detailed, and extremely precise with your prompts.
And even then it absolutely will fuck up. If it actually succeeds at building something that technically works, you’ll spend considerable time afterwards going through its output and removing unnecessary crap it added, fixing duplications, securing insecure garbage, removing mocks (God… So many fucking mocks), and so on.
I think a lot about what my employer is spending on it. It can’t possibly be worth it.


Yeah, code bloat with LLMs is fucking monstrous. If you use them, get used to immediately scouring your code for duplications.


It always is with these guys.


After working on a team that uses LLMs in agentic mode for almost a year, I’d say this is probably accurate.
Most of the work at this point for a big chunk of the team is trying to figure out prompts that will make it do what they want, without producing any user-facing results at all. The rest of us will use it to generate small bits of code, such as one-off scripts to accomplish a specific task - the only area where it’s actually useful.
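To be concrete, here’s the sort of throwaway one-off script I mean. Everything in it (the log format, the file name) is made up purely for illustration:

```python
#!/usr/bin/env python3
"""One-off script: count ERROR lines per logger name in a log file.

Purely illustrative of the "small, disposable script" use case; the
"LEVEL logger: message" log layout is an assumption, not a real format.
"""
import sys
from collections import Counter


def main(path: str) -> None:
    counts = Counter()
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            parts = line.split()
            # Expect lines like: "2024-05-01 12:00:00 ERROR billing.worker: timeout"
            if "ERROR" in parts:
                idx = parts.index("ERROR")
                if idx + 1 < len(parts):
                    counts[parts[idx + 1].rstrip(":")] += 1
    for name, count in counts.most_common():
        print(f"{count:6d}  {name}")


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "app.log")
```

It’s small, disposable, and easy to review in one sitting, which is exactly why the LLM can handle it.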
The shine wears off quickly after the fourth or fifth time it “finishes” a feature by mocking data, because so many publicly facing repos it trained on have mock data in them, so it thinks that’s useful.


In our case, there are enough upper management folks who are opposed to it that I doubt it will last or ever be enforced. For people like me, it really doesn’t make any sense to enforce it in the first place, because all of my teammates are in other states and countries.
Making me go to the office just means you can’t schedule early meetings with me, because I’ll be commuting during that time.


My office just did the same thing. And the backlash is enormous. No one wants it. No one likes it.


The funny thing is that I’m actually an Arch user. I’m just not a dick about it.


Yeah, this sucks. Use the distro you like, people.


That’s how it’s done, yep.


Not the person you asked, but I do that sometimes. For instance, when I want to watch a specific video but I don’t want it affecting the rest of my recommendations.


You’re right: unit tests are another area where they can be helpful, as long as you’re very careful to check them over.
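For example (a completely made-up function and tests, just to illustrate what “checking them over” tends to catch): generated tests love to restate the happy path and skip the failure modes.

```python
import pytest


# Hypothetical code under test.
def parse_port(value: str) -> int:
    """Parse a TCP port from a string; raise ValueError if invalid."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


# The kind of test an LLM tends to generate: it passes, but only covers the happy path.
def test_parse_port_happy_path():
    assert parse_port("8080") == 8080


# What you usually have to add yourself after reviewing: the failure modes.
def test_parse_port_rejects_out_of_range():
    with pytest.raises(ValueError):
        parse_port("70000")


def test_parse_port_rejects_non_numeric():
    with pytest.raises(ValueError):
        parse_port("not-a-port")
```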


Actually, there’s growing evidence that beyond a certain point, more context drastically reduces their performance and accuracy.
I’m of the opinion that LLMs will need a drastic rethink before they can reach the point you describe.
The thing is, it really won’t. The context window isn’t large enough, especially for a decently sized application, and that seems to be a fundamental limitation. Make the context window too large, and the LLM gets massively off track very easily, because there’s too much in it to get distracted by.
And LLMs don’t remember anything. The next time you interact with one and put the whole codebase into its context window again, it won’t know what it did before, even if the last session was ten minutes ago. That’s why they so frequently create bloat.
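Just to put rough numbers on it (the ~4 characters per token rule of thumb and the 128k-token window below are assumptions; both vary by model and tokenizer), a quick back-of-the-envelope estimate shows why a whole codebase doesn’t fit:

```python
import os

# Back-of-the-envelope: how many tokens would it take to put an entire
# codebase into the context window? Assumes ~4 characters per token and a
# 128k-token window; both numbers are rough and vary by model/tokenizer.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 128_000

SOURCE_EXTS = (".py", ".js", ".ts", ".java", ".go", ".rs", ".c", ".cpp")


def estimate_tokens(root: str) -> int:
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(SOURCE_EXTS):
                try:
                    with open(os.path.join(dirpath, name),
                              encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # unreadable file, skip it
    return total_chars // CHARS_PER_TOKEN


if __name__ == "__main__":
    tokens = estimate_tokens(".")
    print(f"~{tokens:,} tokens of source vs. a {CONTEXT_WINDOW_TOKENS:,}-token window")
    print("fits in one window:", tokens <= CONTEXT_WINDOW_TOKENS)
```

Even a mid-sized codebase blows past that long before you add the conversation itself, the diffs, and the tool output.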