

Agents can now run compilation and testing on their own, so the hallucination problem is largely irrelevant. An LLM that hallucinates an API quickly finds out that it fails to work and is forced to retrieve the real API and fix the errors. So it really doesn't matter anymore. The code you wind up with will ultimately work.
The only real question you need to answer yourself is whether the tests it generates are appropriate. Then maybe spend some time refactoring for clarity and extensibility.

I think it'll just mean they start their careers involved in higher-level concerns. It's not like this is the first time that's happened. Programming even just prior to the release of LLM agents was completely different from programming 30 years ago. Programmers have been automating junior jobs away for decades and the industry has only grown, because the fact of the matter is that cheaper software, at least so far, has just created more demand for it. Maybe it'll be saturated one day. But I don't think today's that day.