

An LLM that hallucinates an API quickly finds out that it fails to work and is forced to retrieve the real API and fix the errors. But that can result in it just fixing the errors without actually solving the problem, for example if the unit tests it writes afterwards test the wrong thing.
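A hypothetical sketch of that failure mode (the function and test names here are my own invention, not from any real agent transcript): the agent's first draft hallucinated a nonexistent `datetime.parse_iso`, the error forced it to swap in the real `datetime.fromisoformat`, and its own test goes green, yet the stated task (return the weekday name) still isn't solved because the test checks the wrong thing.

```python
from datetime import datetime

def weekday_of(iso_date: str) -> str:
    # First draft called the hallucinated datetime.parse_iso(iso_date);
    # that raised AttributeError, so the real API was substituted below.
    d = datetime.fromisoformat(iso_date)
    return str(d.weekday())  # still wrong: returns "0".."6", not "Monday".."Sunday"

def test_weekday_of():
    # The test only checks that something string-like comes back,
    # so fixing the API error alone is enough to make it pass.
    assert isinstance(weekday_of("2024-01-01"), str)

test_weekday_of()                # passes, yet the requirement is unmet
print(weekday_of("2024-01-01"))  # prints "0" instead of "Monday"
```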








Finally I can point to a court case for why we need to stop referring to everything as social media.